Sep 4 17:40:35.043495 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:54:07 -00 2024
Sep 4 17:40:35.043524 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep 4 17:40:35.043533 kernel: BIOS-provided physical RAM map:
Sep 4 17:40:35.043542 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 4 17:40:35.043548 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Sep 4 17:40:35.043555 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Sep 4 17:40:35.043564 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Sep 4 17:40:35.043573 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Sep 4 17:40:35.043582 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Sep 4 17:40:35.043588 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Sep 4 17:40:35.043596 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Sep 4 17:40:35.043610 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Sep 4 17:40:35.043619 kernel: printk: bootconsole [earlyser0] enabled
Sep 4 17:40:35.043627 kernel: NX (Execute Disable) protection: active
Sep 4 17:40:35.043640 kernel: APIC: Static calls initialized
Sep 4 17:40:35.043648 kernel: efi: EFI v2.7 by Microsoft
Sep 4 17:40:35.043657 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Sep 4 17:40:35.043668 kernel: SMBIOS 3.1.0 present.
Sep 4 17:40:35.043675 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Sep 4 17:40:35.043684 kernel: Hypervisor detected: Microsoft Hyper-V
Sep 4 17:40:35.043692 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Sep 4 17:40:35.043700 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Sep 4 17:40:35.043709 kernel: Hyper-V: Nested features: 0x1e0101
Sep 4 17:40:35.043716 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Sep 4 17:40:35.043727 kernel: Hyper-V: Using hypercall for remote TLB flush
Sep 4 17:40:35.043734 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 4 17:40:35.043745 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 4 17:40:35.043754 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Sep 4 17:40:35.047786 kernel: tsc: Detected 2593.905 MHz processor
Sep 4 17:40:35.047797 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 17:40:35.047805 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 17:40:35.047812 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Sep 4 17:40:35.047819 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 4 17:40:35.047832 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 17:40:35.047839 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Sep 4 17:40:35.047846 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Sep 4 17:40:35.047853 kernel: Using GB pages for direct mapping
Sep 4 17:40:35.047863 kernel: Secure boot disabled
Sep 4 17:40:35.047871 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:40:35.047881 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Sep 4 17:40:35.047894 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:40:35.047906 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:40:35.047916 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Sep 4 17:40:35.047924 kernel: ACPI: FACS 0x000000003FFFE000 000040
Sep 4 17:40:35.047934 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:40:35.047942 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:40:35.047950 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:40:35.047962 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:40:35.047969 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:40:35.047980 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:40:35.047987 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:40:35.047997 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Sep 4 17:40:35.048005 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Sep 4 17:40:35.048014 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Sep 4 17:40:35.048023 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Sep 4 17:40:35.048034 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Sep 4 17:40:35.048043 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Sep 4 17:40:35.048050 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Sep 4 17:40:35.048061 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Sep 4 17:40:35.048068 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Sep 4 17:40:35.048078 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Sep 4 17:40:35.048086 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 4 17:40:35.048096 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 4 17:40:35.048104 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Sep 4 17:40:35.048115 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Sep 4 17:40:35.048123 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Sep 4 17:40:35.048131 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Sep 4 17:40:35.048139 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Sep 4 17:40:35.048146 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Sep 4 17:40:35.048157 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Sep 4 17:40:35.048164 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Sep 4 17:40:35.048172 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Sep 4 17:40:35.048179 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Sep 4 17:40:35.048189 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Sep 4 17:40:35.048197 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Sep 4 17:40:35.048207 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Sep 4 17:40:35.048214 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Sep 4 17:40:35.048225 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Sep 4 17:40:35.048234 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Sep 4 17:40:35.048243 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Sep 4 17:40:35.048260 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Sep 4 17:40:35.048271 kernel: Zone ranges:
Sep 4 17:40:35.048283 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 17:40:35.048290 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 4 17:40:35.048298 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Sep 4 17:40:35.048307 kernel: Movable zone start for each node
Sep 4 17:40:35.048314 kernel: Early memory node ranges
Sep 4 17:40:35.048321 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 4 17:40:35.048332 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Sep 4 17:40:35.048340 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Sep 4 17:40:35.048350 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Sep 4 17:40:35.048362 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Sep 4 17:40:35.048372 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 17:40:35.048382 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 4 17:40:35.048390 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Sep 4 17:40:35.048398 kernel: ACPI: PM-Timer IO Port: 0x408
Sep 4 17:40:35.048408 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Sep 4 17:40:35.048416 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Sep 4 17:40:35.048425 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 17:40:35.048433 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 17:40:35.048445 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Sep 4 17:40:35.048452 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 4 17:40:35.048462 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Sep 4 17:40:35.048470 kernel: Booting paravirtualized kernel on Hyper-V
Sep 4 17:40:35.048480 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 17:40:35.048488 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 4 17:40:35.048498 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Sep 4 17:40:35.048506 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Sep 4 17:40:35.048516 kernel: pcpu-alloc: [0] 0 1
Sep 4 17:40:35.048525 kernel: Hyper-V: PV spinlocks enabled
Sep 4 17:40:35.048535 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 17:40:35.048544 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep 4 17:40:35.048555 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:40:35.048562 kernel: random: crng init done
Sep 4 17:40:35.048572 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 4 17:40:35.048580 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 17:40:35.048589 kernel: Fallback order for Node 0: 0
Sep 4 17:40:35.048600 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Sep 4 17:40:35.048617 kernel: Policy zone: Normal
Sep 4 17:40:35.048625 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:40:35.048637 kernel: software IO TLB: area num 2.
Sep 4 17:40:35.048646 kernel: Memory: 8077068K/8387460K available (12288K kernel code, 2304K rwdata, 22708K rodata, 42704K init, 2488K bss, 310132K reserved, 0K cma-reserved)
Sep 4 17:40:35.048656 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 17:40:35.048665 kernel: ftrace: allocating 37748 entries in 148 pages
Sep 4 17:40:35.048681 kernel: ftrace: allocated 148 pages with 3 groups
Sep 4 17:40:35.048691 kernel: Dynamic Preempt: voluntary
Sep 4 17:40:35.048702 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:40:35.048714 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:40:35.048725 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 17:40:35.048737 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:40:35.048745 kernel: Rude variant of Tasks RCU enabled.
Sep 4 17:40:35.048756 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:40:35.048775 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:40:35.048787 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 17:40:35.048797 kernel: Using NULL legacy PIC
Sep 4 17:40:35.048806 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Sep 4 17:40:35.048816 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 17:40:35.048825 kernel: Console: colour dummy device 80x25
Sep 4 17:40:35.048835 kernel: printk: console [tty1] enabled
Sep 4 17:40:35.048843 kernel: printk: console [ttyS0] enabled
Sep 4 17:40:35.048853 kernel: printk: bootconsole [earlyser0] disabled
Sep 4 17:40:35.048862 kernel: ACPI: Core revision 20230628
Sep 4 17:40:35.048872 kernel: Failed to register legacy timer interrupt
Sep 4 17:40:35.048883 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 17:40:35.048893 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 4 17:40:35.048902 kernel: Hyper-V: Using IPI hypercalls
Sep 4 17:40:35.048912 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Sep 4 17:40:35.048921 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Sep 4 17:40:35.048931 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Sep 4 17:40:35.048940 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Sep 4 17:40:35.048950 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Sep 4 17:40:35.048958 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Sep 4 17:40:35.048970 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Sep 4 17:40:35.048980 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 4 17:40:35.048989 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Sep 4 17:40:35.048998 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 17:40:35.049008 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 17:40:35.049017 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Sep 4 17:40:35.049032 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Sep 4 17:40:35.049043 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 4 17:40:35.049054 kernel: RETBleed: Vulnerable
Sep 4 17:40:35.049067 kernel: Speculative Store Bypass: Vulnerable
Sep 4 17:40:35.049076 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 4 17:40:35.049087 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 4 17:40:35.049095 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 4 17:40:35.049106 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 17:40:35.049114 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 17:40:35.049124 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 17:40:35.049133 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 4 17:40:35.049142 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 4 17:40:35.049152 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 4 17:40:35.049160 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 17:40:35.049172 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Sep 4 17:40:35.049181 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Sep 4 17:40:35.049190 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Sep 4 17:40:35.049200 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Sep 4 17:40:35.049209 kernel: Freeing SMP alternatives memory: 32K
Sep 4 17:40:35.049218 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:40:35.049228 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 17:40:35.049236 kernel: landlock: Up and running.
Sep 4 17:40:35.049247 kernel: SELinux: Initializing.
Sep 4 17:40:35.049255 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 17:40:35.049265 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 17:40:35.049274 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 4 17:40:35.049286 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:40:35.049295 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:40:35.049305 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:40:35.049314 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 4 17:40:35.049324 kernel: signal: max sigframe size: 3632
Sep 4 17:40:35.049333 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:40:35.049343 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:40:35.049352 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 4 17:40:35.049362 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:40:35.049373 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 17:40:35.049387 kernel: .... node #0, CPUs: #1
Sep 4 17:40:35.049398 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Sep 4 17:40:35.049410 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 4 17:40:35.049421 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 17:40:35.049430 kernel: smpboot: Max logical packages: 1
Sep 4 17:40:35.049441 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Sep 4 17:40:35.049449 kernel: devtmpfs: initialized
Sep 4 17:40:35.049462 kernel: x86/mm: Memory block size: 128MB
Sep 4 17:40:35.049471 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Sep 4 17:40:35.049482 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:40:35.049490 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 17:40:35.049501 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:40:35.049509 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:40:35.049520 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:40:35.049528 kernel: audit: type=2000 audit(1725471633.027:1): state=initialized audit_enabled=0 res=1
Sep 4 17:40:35.050285 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:40:35.050309 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 17:40:35.050326 kernel: cpuidle: using governor menu
Sep 4 17:40:35.050339 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:40:35.050353 kernel: dca service started, version 1.12.1
Sep 4 17:40:35.050366 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Sep 4 17:40:35.050381 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 17:40:35.050396 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 17:40:35.050409 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 17:40:35.050423 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:40:35.050441 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:40:35.050455 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:40:35.050470 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:40:35.050483 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:40:35.050498 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:40:35.050514 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 17:40:35.050529 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 4 17:40:35.050545 kernel: ACPI: Interpreter enabled
Sep 4 17:40:35.050559 kernel: ACPI: PM: (supports S0 S5)
Sep 4 17:40:35.050576 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 17:40:35.050591 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 17:40:35.050606 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 4 17:40:35.050620 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Sep 4 17:40:35.050634 kernel: iommu: Default domain type: Translated
Sep 4 17:40:35.050648 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 17:40:35.050663 kernel: efivars: Registered efivars operations
Sep 4 17:40:35.050677 kernel: PCI: Using ACPI for IRQ routing
Sep 4 17:40:35.050692 kernel: PCI: System does not support PCI
Sep 4 17:40:35.050710 kernel: vgaarb: loaded
Sep 4 17:40:35.050724 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Sep 4 17:40:35.050739 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:40:35.050754 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:40:35.051808 kernel: pnp: PnP ACPI init
Sep 4 17:40:35.051826 kernel: pnp: PnP ACPI: found 3 devices
Sep 4 17:40:35.051842 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 17:40:35.051857 kernel: NET: Registered PF_INET protocol family
Sep 4 17:40:35.051873 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 4 17:40:35.051892 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 4 17:40:35.051907 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:40:35.051922 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 17:40:35.051937 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 4 17:40:35.051952 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 4 17:40:35.051968 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 4 17:40:35.051983 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 4 17:40:35.051997 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:40:35.052012 kernel: NET: Registered PF_XDP protocol family
Sep 4 17:40:35.052030 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:40:35.052046 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 4 17:40:35.052061 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Sep 4 17:40:35.052076 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 4 17:40:35.052092 kernel: Initialise system trusted keyrings
Sep 4 17:40:35.052107 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 4 17:40:35.052122 kernel: Key type asymmetric registered
Sep 4 17:40:35.052136 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:40:35.052151 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 4 17:40:35.052169 kernel: io scheduler mq-deadline registered
Sep 4 17:40:35.052184 kernel: io scheduler kyber registered
Sep 4 17:40:35.052199 kernel: io scheduler bfq registered
Sep 4 17:40:35.052214 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 17:40:35.052230 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 17:40:35.052244 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 17:40:35.052259 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 4 17:40:35.052274 kernel: i8042: PNP: No PS/2 controller found.
Sep 4 17:40:35.052446 kernel: rtc_cmos 00:02: registered as rtc0
Sep 4 17:40:35.052572 kernel: rtc_cmos 00:02: setting system clock to 2024-09-04T17:40:34 UTC (1725471634)
Sep 4 17:40:35.052684 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Sep 4 17:40:35.052703 kernel: intel_pstate: CPU model not supported
Sep 4 17:40:35.052718 kernel: efifb: probing for efifb
Sep 4 17:40:35.052733 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 4 17:40:35.052749 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 4 17:40:35.054790 kernel: efifb: scrolling: redraw
Sep 4 17:40:35.054810 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 4 17:40:35.054830 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 17:40:35.054846 kernel: fb0: EFI VGA frame buffer device
Sep 4 17:40:35.054862 kernel: pstore: Using crash dump compression: deflate
Sep 4 17:40:35.054878 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 4 17:40:35.054893 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:40:35.054908 kernel: Segment Routing with IPv6
Sep 4 17:40:35.054923 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:40:35.054939 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:40:35.054954 kernel: Key type dns_resolver registered
Sep 4 17:40:35.054972 kernel: IPI shorthand broadcast: enabled
Sep 4 17:40:35.054987 kernel: sched_clock: Marking stable (761002900, 38464100)->(957413600, -157946600)
Sep 4 17:40:35.055003 kernel: registered taskstats version 1
Sep 4 17:40:35.055018 kernel: Loading compiled-in X.509 certificates
Sep 4 17:40:35.055033 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 8669771ab5e11f458b79e6634fe685dacc266b18'
Sep 4 17:40:35.055048 kernel: Key type .fscrypt registered
Sep 4 17:40:35.055063 kernel: Key type fscrypt-provisioning registered
Sep 4 17:40:35.055078 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:40:35.055096 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:40:35.055111 kernel: ima: No architecture policies found
Sep 4 17:40:35.055126 kernel: clk: Disabling unused clocks
Sep 4 17:40:35.055141 kernel: Freeing unused kernel image (initmem) memory: 42704K
Sep 4 17:40:35.055156 kernel: Write protecting the kernel read-only data: 36864k
Sep 4 17:40:35.055172 kernel: Freeing unused kernel image (rodata/data gap) memory: 1868K
Sep 4 17:40:35.055187 kernel: Run /init as init process
Sep 4 17:40:35.055202 kernel: with arguments:
Sep 4 17:40:35.055217 kernel: /init
Sep 4 17:40:35.055232 kernel: with environment:
Sep 4 17:40:35.055250 kernel: HOME=/
Sep 4 17:40:35.055264 kernel: TERM=linux
Sep 4 17:40:35.055279 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:40:35.055297 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:40:35.055315 systemd[1]: Detected virtualization microsoft.
Sep 4 17:40:35.055332 systemd[1]: Detected architecture x86-64.
Sep 4 17:40:35.055348 systemd[1]: Running in initrd.
Sep 4 17:40:35.055365 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:40:35.055381 systemd[1]: Hostname set to .
Sep 4 17:40:35.055397 systemd[1]: Initializing machine ID from random generator.
Sep 4 17:40:35.055413 systemd[1]: Queued start job for default target initrd.target.
Sep 4 17:40:35.055430 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:40:35.055445 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:40:35.055462 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:40:35.055483 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:40:35.055501 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:40:35.055518 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:40:35.055536 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:40:35.055552 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:40:35.055568 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:40:35.055584 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:40:35.055601 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:40:35.055619 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:40:35.055635 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:40:35.055652 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:40:35.055668 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:40:35.055684 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:40:35.055700 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:40:35.055716 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:40:35.055732 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:40:35.055746 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:40:35.055794 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:40:35.055810 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:40:35.055826 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:40:35.055841 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:40:35.055857 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:40:35.055873 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:40:35.055888 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:40:35.055904 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:40:35.055923 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:40:35.055939 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:40:35.055978 systemd-journald[176]: Collecting audit messages is disabled.
Sep 4 17:40:35.056015 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:40:35.056035 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:40:35.056070 systemd-journald[176]: Journal started
Sep 4 17:40:35.056107 systemd-journald[176]: Runtime Journal (/run/log/journal/71116bac6a4d4aada43a40c7b85d6b95) is 8.0M, max 158.8M, 150.8M free.
Sep 4 17:40:35.061558 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:40:35.033965 systemd-modules-load[177]: Inserted module 'overlay'
Sep 4 17:40:35.067773 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:40:35.073920 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 17:40:35.080701 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:40:35.093773 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:40:35.093889 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:40:35.101265 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:40:35.110222 kernel: Bridge firewalling registered
Sep 4 17:40:35.105443 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:40:35.113831 systemd-modules-load[177]: Inserted module 'br_netfilter'
Sep 4 17:40:35.118945 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:40:35.124636 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:40:35.127363 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:40:35.140992 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:40:35.143642 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:40:35.151916 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:40:35.161263 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:40:35.170958 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:40:35.176978 dracut-cmdline[209]: dracut-dracut-053
Sep 4 17:40:35.180127 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep 4 17:40:35.228524 systemd-resolved[215]: Positive Trust Anchors:
Sep 4 17:40:35.228538 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:40:35.228593 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 17:40:35.251714 systemd-resolved[215]: Defaulting to hostname 'linux'.
Sep 4 17:40:35.254980 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:40:35.259610 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:40:35.276776 kernel: SCSI subsystem initialized
Sep 4 17:40:35.286780 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:40:35.297775 kernel: iscsi: registered transport (tcp)
Sep 4 17:40:35.318199 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:40:35.318258 kernel: QLogic iSCSI HBA Driver
Sep 4 17:40:35.352813 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:40:35.360908 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:40:35.387193 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:40:35.387246 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:40:35.390158 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:40:35.429784 kernel: raid6: avx512x4 gen() 18599 MB/s
Sep 4 17:40:35.448778 kernel: raid6: avx512x2 gen() 18801 MB/s
Sep 4 17:40:35.467771 kernel: raid6: avx512x1 gen() 18697 MB/s
Sep 4 17:40:35.485772 kernel: raid6: avx2x4 gen() 18732 MB/s
Sep 4 17:40:35.504775 kernel: raid6: avx2x2 gen() 18739 MB/s
Sep 4 17:40:35.524129 kernel: raid6: avx2x1 gen() 14078 MB/s
Sep 4 17:40:35.524158 kernel: raid6: using algorithm avx512x2 gen() 18801 MB/s
Sep 4 17:40:35.544470 kernel: raid6: .... xor() 30413 MB/s, rmw enabled
Sep 4 17:40:35.544515 kernel: raid6: using avx512x2 recovery algorithm
Sep 4 17:40:35.566779 kernel: xor: automatically using best checksumming function avx
Sep 4 17:40:35.718782 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:40:35.727820 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:40:35.735919 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:40:35.748927 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Sep 4 17:40:35.753326 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:40:35.764943 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:40:35.777011 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Sep 4 17:40:35.801527 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:40:35.808877 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:40:35.846984 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:40:35.856945 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:40:35.873879 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:40:35.880055 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:40:35.880143 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:40:35.880583 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:40:35.903058 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:40:35.918712 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:40:35.941783 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 17:40:35.950217 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:40:35.950466 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:40:35.963081 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:40:35.968499 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:40:35.968901 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:40:35.973702 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:40:35.982974 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 4 17:40:35.983009 kernel: AES CTR mode by8 optimization enabled
Sep 4 17:40:35.986778 kernel: hv_vmbus: Vmbus version:5.2
Sep 4 17:40:35.991100 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:40:35.999189 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:40:35.999267 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:40:36.011503 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 4 17:40:36.011530 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 4 17:40:36.024775 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 17:40:36.035304 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:40:36.039749 kernel: PTP clock support registered
Sep 4 17:40:36.039787 kernel: hv_vmbus: registering driver hv_storvsc
Sep 4 17:40:36.048337 kernel: scsi host1: storvsc_host_t
Sep 4 17:40:36.048398 kernel: scsi host0: storvsc_host_t
Sep 4 17:40:36.055802 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 4 17:40:36.057027 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:40:36.062796 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 4 17:40:36.066110 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 4 17:40:36.069785 kernel: hv_vmbus: registering driver hv_netvsc
Sep 4 17:40:36.074958 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:40:36.094506 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 4 17:40:36.101540 kernel: hv_utils: Registering HyperV Utility Driver
Sep 4 17:40:36.101574 kernel: hv_vmbus: registering driver hv_utils
Sep 4 17:40:36.107836 kernel: hv_vmbus: registering driver hid_hyperv
Sep 4 17:40:36.111291 kernel: hv_utils: Heartbeat IC version 3.0
Sep 4 17:40:36.111343 kernel: hv_utils: Shutdown IC version 3.2
Sep 4 17:40:36.111913 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:40:35.968618 kernel: hv_utils: TimeSync IC version 4.0
Sep 4 17:40:35.980196 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 4 17:40:35.980218 systemd-journald[176]: Time jumped backwards, rotating.
Sep 4 17:40:35.983467 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 4 17:40:35.971180 systemd-resolved[215]: Clock change detected. Flushing caches.
Sep 4 17:40:35.997337 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 4 17:40:35.997562 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 4 17:40:35.999282 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 4 17:40:36.012918 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 4 17:40:36.013130 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 4 17:40:36.018282 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 4 17:40:36.018539 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 4 17:40:36.018766 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 4 17:40:36.024567 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 17:40:36.024616 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 4 17:40:36.050333 kernel: hv_netvsc 6045bdd2-c502-6045-bdd2-c5026045bdd2 eth0: VF slot 1 added
Sep 4 17:40:36.059275 kernel: hv_vmbus: registering driver hv_pci
Sep 4 17:40:36.065529 kernel: hv_pci 64d5fa41-863d-4a4f-a194-0f3a3f11c2e9: PCI VMBus probing: Using version 0x10004
Sep 4 17:40:36.065707 kernel: hv_pci 64d5fa41-863d-4a4f-a194-0f3a3f11c2e9: PCI host bridge to bus 863d:00
Sep 4 17:40:36.070490 kernel: pci_bus 863d:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Sep 4 17:40:36.073189 kernel: pci_bus 863d:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 4 17:40:36.077332 kernel: pci 863d:00:02.0: [15b3:1016] type 00 class 0x020000
Sep 4 17:40:36.082271 kernel: pci 863d:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Sep 4 17:40:36.085595 kernel: pci 863d:00:02.0: enabling Extended Tags
Sep 4 17:40:36.096298 kernel: pci 863d:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 863d:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Sep 4 17:40:36.101526 kernel: pci_bus 863d:00: busn_res: [bus 00-ff] end is updated to 00
Sep 4 17:40:36.101784 kernel: pci 863d:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Sep 4 17:40:36.274581 kernel: mlx5_core 863d:00:02.0: enabling device (0000 -> 0002)
Sep 4 17:40:36.278276 kernel: mlx5_core 863d:00:02.0: firmware version: 14.30.1284
Sep 4 17:40:36.495343 kernel: hv_netvsc 6045bdd2-c502-6045-bdd2-c5026045bdd2 eth0: VF registering: eth1
Sep 4 17:40:36.495655 kernel: mlx5_core 863d:00:02.0 eth1: joined to eth0
Sep 4 17:40:36.500310 kernel: mlx5_core 863d:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Sep 4 17:40:36.508274 kernel: mlx5_core 863d:00:02.0 enP34365s1: renamed from eth1
Sep 4 17:40:36.584392 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 4 17:40:36.662295 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (441)
Sep 4 17:40:36.677050 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 4 17:40:36.686277 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (447)
Sep 4 17:40:36.704413 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 4 17:40:36.707620 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 4 17:40:36.722222 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 4 17:40:36.734398 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:40:37.755360 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 17:40:37.755431 disk-uuid[602]: The operation has completed successfully.
Sep 4 17:40:37.834572 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:40:37.834684 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:40:37.850449 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:40:37.855610 sh[718]: Success
Sep 4 17:40:37.888552 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 4 17:40:38.084762 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:40:38.096381 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:40:38.101065 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:40:38.117541 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772
Sep 4 17:40:38.117591 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:40:38.120833 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:40:38.123342 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:40:38.125543 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:40:38.421990 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:40:38.422959 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:40:38.431507 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:40:38.437235 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:40:38.452403 kernel: BTRFS info (device sda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:40:38.452449 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:40:38.454797 kernel: BTRFS info (device sda6): using free space tree
Sep 4 17:40:38.475286 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 17:40:38.484786 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:40:38.491159 kernel: BTRFS info (device sda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:40:38.497990 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:40:38.509492 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:40:38.525045 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:40:38.534479 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:40:38.552890 systemd-networkd[902]: lo: Link UP
Sep 4 17:40:38.552898 systemd-networkd[902]: lo: Gained carrier
Sep 4 17:40:38.555001 systemd-networkd[902]: Enumeration completed
Sep 4 17:40:38.555229 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:40:38.557175 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:40:38.557179 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:40:38.558690 systemd[1]: Reached target network.target - Network.
Sep 4 17:40:38.616279 kernel: mlx5_core 863d:00:02.0 enP34365s1: Link up
Sep 4 17:40:38.644372 kernel: hv_netvsc 6045bdd2-c502-6045-bdd2-c5026045bdd2 eth0: Data path switched to VF: enP34365s1
Sep 4 17:40:38.644690 systemd-networkd[902]: enP34365s1: Link UP
Sep 4 17:40:38.644860 systemd-networkd[902]: eth0: Link UP
Sep 4 17:40:38.645066 systemd-networkd[902]: eth0: Gained carrier
Sep 4 17:40:38.645082 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:40:38.649520 systemd-networkd[902]: enP34365s1: Gained carrier
Sep 4 17:40:38.680308 systemd-networkd[902]: eth0: DHCPv4 address 10.200.4.29/24, gateway 10.200.4.1 acquired from 168.63.129.16
Sep 4 17:40:39.444390 ignition[877]: Ignition 2.19.0
Sep 4 17:40:39.444402 ignition[877]: Stage: fetch-offline
Sep 4 17:40:39.444450 ignition[877]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:40:39.444462 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 17:40:39.444584 ignition[877]: parsed url from cmdline: ""
Sep 4 17:40:39.444589 ignition[877]: no config URL provided
Sep 4 17:40:39.444596 ignition[877]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:40:39.444607 ignition[877]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:40:39.444613 ignition[877]: failed to fetch config: resource requires networking
Sep 4 17:40:39.446345 ignition[877]: Ignition finished successfully
Sep 4 17:40:39.463562 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:40:39.472445 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 17:40:39.488668 ignition[912]: Ignition 2.19.0
Sep 4 17:40:39.488678 ignition[912]: Stage: fetch
Sep 4 17:40:39.488893 ignition[912]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:40:39.488906 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 17:40:39.489004 ignition[912]: parsed url from cmdline: ""
Sep 4 17:40:39.489008 ignition[912]: no config URL provided
Sep 4 17:40:39.489012 ignition[912]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:40:39.489019 ignition[912]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:40:39.489038 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 4 17:40:39.567702 ignition[912]: GET result: OK
Sep 4 17:40:39.567909 ignition[912]: config has been read from IMDS userdata
Sep 4 17:40:39.568021 ignition[912]: parsing config with SHA512: c4359dd1f05fe3fa8bbc0d3cc6d6ab95c26b4d04667fbdf40041591ecaa7dfd75ce30c017acb26d85567721f82875f56308af38788bcbc65c0fb827340f2c2a5
Sep 4 17:40:39.573416 unknown[912]: fetched base config from "system"
Sep 4 17:40:39.573433 unknown[912]: fetched base config from "system"
Sep 4 17:40:39.573443 unknown[912]: fetched user config from "azure"
Sep 4 17:40:39.576354 ignition[912]: fetch: fetch complete
Sep 4 17:40:39.579786 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 17:40:39.576363 ignition[912]: fetch: fetch passed
Sep 4 17:40:39.576427 ignition[912]: Ignition finished successfully
Sep 4 17:40:39.593428 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:40:39.609505 ignition[918]: Ignition 2.19.0
Sep 4 17:40:39.609515 ignition[918]: Stage: kargs
Sep 4 17:40:39.609735 ignition[918]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:40:39.612806 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:40:39.609747 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 17:40:39.610617 ignition[918]: kargs: kargs passed
Sep 4 17:40:39.610659 ignition[918]: Ignition finished successfully
Sep 4 17:40:39.629423 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:40:39.643626 ignition[924]: Ignition 2.19.0
Sep 4 17:40:39.643636 ignition[924]: Stage: disks
Sep 4 17:40:39.643853 ignition[924]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:40:39.643862 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 17:40:39.644710 ignition[924]: disks: disks passed
Sep 4 17:40:39.644747 ignition[924]: Ignition finished successfully
Sep 4 17:40:39.654092 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:40:39.656446 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:40:39.660808 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:40:39.663433 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:40:39.672403 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:40:39.676677 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:40:39.684486 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:40:39.741200 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Sep 4 17:40:39.745147 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:40:39.756385 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:40:39.850276 kernel: EXT4-fs (sda9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none.
Sep 4 17:40:39.851083 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:40:39.853619 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:40:39.893446 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:40:39.901365 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:40:39.912284 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (943)
Sep 4 17:40:39.916273 kernel: BTRFS info (device sda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:40:39.916300 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:40:39.917852 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 4 17:40:39.923243 kernel: BTRFS info (device sda6): using free space tree
Sep 4 17:40:39.926269 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 17:40:39.927390 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:40:39.927421 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:40:39.936418 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:40:39.940106 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:40:39.953412 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:40:40.498456 systemd-networkd[902]: eth0: Gained IPv6LL
Sep 4 17:40:40.498815 systemd-networkd[902]: enP34365s1: Gained IPv6LL
Sep 4 17:40:40.568303 coreos-metadata[945]: Sep 04 17:40:40.568 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 4 17:40:40.573405 coreos-metadata[945]: Sep 04 17:40:40.573 INFO Fetch successful
Sep 4 17:40:40.575802 coreos-metadata[945]: Sep 04 17:40:40.573 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 4 17:40:40.581162 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:40:40.585026 coreos-metadata[945]: Sep 04 17:40:40.585 INFO Fetch successful
Sep 4 17:40:40.587083 coreos-metadata[945]: Sep 04 17:40:40.585 INFO wrote hostname ci-4054.1.0-a-6fd622a1a5 to /sysroot/etc/hostname
Sep 4 17:40:40.586756 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 4 17:40:40.632385 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:40:40.638716 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:40:40.643541 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:40:41.772796 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:40:41.781399 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:40:41.788417 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:40:41.797421 kernel: BTRFS info (device sda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:40:41.796320 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:40:41.821346 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:40:41.828801 ignition[1061]: INFO : Ignition 2.19.0 Sep 4 17:40:41.828801 ignition[1061]: INFO : Stage: mount Sep 4 17:40:41.834926 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:40:41.834926 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:40:41.834926 ignition[1061]: INFO : mount: mount passed Sep 4 17:40:41.834926 ignition[1061]: INFO : Ignition finished successfully Sep 4 17:40:41.830737 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:40:41.842357 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:40:41.853109 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:40:41.867276 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1073) Sep 4 17:40:41.867309 kernel: BTRFS info (device sda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:40:41.870271 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:40:41.874115 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:40:41.879274 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:40:41.880093 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:40:41.901577 ignition[1089]: INFO : Ignition 2.19.0 Sep 4 17:40:41.901577 ignition[1089]: INFO : Stage: files Sep 4 17:40:41.905377 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:40:41.905377 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:40:41.905377 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:40:41.905377 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:40:41.905377 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:40:41.975008 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:40:41.979019 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:40:41.979019 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:40:41.975501 unknown[1089]: wrote ssh authorized keys file for user: core Sep 4 17:40:42.053041 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 4 17:40:42.057625 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 4 17:40:42.057625 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:40:42.057625 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 17:40:42.282801 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 4 17:40:42.330061 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:40:42.330061 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:40:42.338486 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/home/core/install.sh" Sep 4 17:40:42.342205 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:40:42.345916 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:40:42.349814 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:40:42.349814 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:40:42.349814 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:40:42.349814 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:40:42.349814 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:40:42.349814 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:40:42.349814 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:40:42.349814 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:40:42.349814 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:40:42.349814 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Sep 4 17:40:42.889018 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 4 17:40:43.236669 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:40:43.236669 ignition[1089]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 4 17:40:43.252381 ignition[1089]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 4 17:40:43.257431 ignition[1089]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 4 17:40:43.257431 ignition[1089]: INFO : files: op(c): [finished] processing unit "containerd.service" Sep 4 17:40:43.257431 ignition[1089]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Sep 4 17:40:43.257431 ignition[1089]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:40:43.257431 ignition[1089]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:40:43.257431 ignition[1089]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" 
Sep 4 17:40:43.257431 ignition[1089]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:40:43.257431 ignition[1089]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:40:43.257431 ignition[1089]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:40:43.257431 ignition[1089]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:40:43.257431 ignition[1089]: INFO : files: files passed Sep 4 17:40:43.257431 ignition[1089]: INFO : Ignition finished successfully Sep 4 17:40:43.254419 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:40:43.305560 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:40:43.311213 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:40:43.317652 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:40:43.319892 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 17:40:43.354753 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:40:43.354753 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:40:43.362419 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:40:43.367576 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:40:43.370780 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:40:43.381426 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:40:43.402502 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:40:43.402605 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 17:40:43.407790 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 17:40:43.412436 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:40:43.414598 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:40:43.423664 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:40:43.437095 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:40:43.448494 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:40:43.459835 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:40:43.462387 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:40:43.469704 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:40:43.471780 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:40:43.471884 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:40:43.480973 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:40:43.485553 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:40:43.489586 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Sep 4 17:40:43.492287 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:40:43.496970 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:40:43.501983 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:40:43.506579 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:40:43.511428 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:40:43.518604 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:40:43.523106 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:40:43.523618 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:40:43.523746 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:40:43.531139 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:40:43.536368 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:40:43.541143 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:40:43.543497 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:40:43.546466 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:40:43.551568 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:40:43.558154 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:40:43.558359 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:40:43.563403 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:40:43.563507 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:40:43.572055 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 4 17:40:43.572223 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 17:40:43.587573 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:40:43.589605 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:40:43.591355 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:40:43.595699 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:40:43.601527 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:40:43.601869 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:40:43.608752 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:40:43.608913 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:40:43.624480 ignition[1142]: INFO : Ignition 2.19.0 Sep 4 17:40:43.624480 ignition[1142]: INFO : Stage: umount Sep 4 17:40:43.624480 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:40:43.624480 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:40:43.614424 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:40:43.641621 ignition[1142]: INFO : umount: umount passed Sep 4 17:40:43.641621 ignition[1142]: INFO : Ignition finished successfully Sep 4 17:40:43.614511 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:40:43.626988 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:40:43.627099 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Sep 4 17:40:43.633539 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:40:43.633640 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:40:43.637374 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:40:43.637431 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:40:43.641619 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 17:40:43.641671 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 17:40:43.666934 systemd[1]: Stopped target network.target - Network. Sep 4 17:40:43.667028 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:40:43.667089 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:40:43.667766 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:40:43.668082 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:40:43.673072 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:40:43.677542 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:40:43.679520 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:40:43.696153 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:40:43.696216 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:40:43.700049 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:40:43.700099 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:40:43.704508 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:40:43.704564 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:40:43.705384 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:40:43.705428 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:40:43.706348 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:40:43.706588 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:40:43.708125 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:40:43.720111 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:40:43.720223 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:40:43.720510 systemd-networkd[902]: eth0: DHCPv6 lease lost Sep 4 17:40:43.724769 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:40:43.724892 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:40:43.731214 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:40:43.732994 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:40:43.759473 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:40:43.763717 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:40:43.763811 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:40:43.768809 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:40:43.768859 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:40:43.777540 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:40:43.777603 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:40:43.782054 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Sep 4 17:40:43.782110 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:40:43.787081 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:40:43.807872 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:40:43.808042 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:40:43.812789 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:40:43.812838 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:40:43.815452 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:40:43.815487 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:40:43.815762 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:40:43.815801 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:40:43.816507 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:40:43.816542 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:40:43.817167 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:40:43.817203 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:40:43.852467 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:40:43.857668 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:40:43.865512 kernel: hv_netvsc 6045bdd2-c502-6045-bdd2-c5026045bdd2 eth0: Data path switched from VF: enP34365s1 Sep 4 17:40:43.857729 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:40:43.863019 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:40:43.863072 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:40:43.872247 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:40:43.872346 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:40:43.892960 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:40:43.893081 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:40:44.174698 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:40:44.174856 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:40:44.179470 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:40:44.185575 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:40:44.185663 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:40:44.197486 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:40:44.308078 systemd[1]: Switching root. 
Sep 4 17:40:44.339004 systemd-journald[176]: Journal stopped
Sep 4 17:40:36.686277 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (447) Sep 4 17:40:36.704413 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 4 17:40:36.707620 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 4 17:40:36.722222 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 4 17:40:36.734398 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 17:40:37.755360 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:40:37.755431 disk-uuid[602]: The operation has completed successfully. Sep 4 17:40:37.834572 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 17:40:37.834684 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:40:37.850449 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 17:40:37.855610 sh[718]: Success Sep 4 17:40:37.888552 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 4 17:40:38.084762 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:40:38.096381 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:40:38.101065 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 17:40:38.117541 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772 Sep 4 17:40:38.117591 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:40:38.120833 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:40:38.123342 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:40:38.125543 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:40:38.421990 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:40:38.422959 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:40:38.431507 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 17:40:38.437235 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 17:40:38.452403 kernel: BTRFS info (device sda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:40:38.452449 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:40:38.454797 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:40:38.475286 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:40:38.484786 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 17:40:38.491159 kernel: BTRFS info (device sda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:40:38.497990 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:40:38.509492 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 17:40:38.525045 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:40:38.534479 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 4 17:40:38.552890 systemd-networkd[902]: lo: Link UP Sep 4 17:40:38.552898 systemd-networkd[902]: lo: Gained carrier Sep 4 17:40:38.555001 systemd-networkd[902]: Enumeration completed Sep 4 17:40:38.555229 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:40:38.557175 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:40:38.557179 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:40:38.558690 systemd[1]: Reached target network.target - Network. Sep 4 17:40:38.616279 kernel: mlx5_core 863d:00:02.0 enP34365s1: Link up Sep 4 17:40:38.644372 kernel: hv_netvsc 6045bdd2-c502-6045-bdd2-c5026045bdd2 eth0: Data path switched to VF: enP34365s1 Sep 4 17:40:38.644690 systemd-networkd[902]: enP34365s1: Link UP Sep 4 17:40:38.644860 systemd-networkd[902]: eth0: Link UP Sep 4 17:40:38.645066 systemd-networkd[902]: eth0: Gained carrier Sep 4 17:40:38.645082 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:40:38.649520 systemd-networkd[902]: enP34365s1: Gained carrier Sep 4 17:40:38.680308 systemd-networkd[902]: eth0: DHCPv4 address 10.200.4.29/24, gateway 10.200.4.1 acquired from 168.63.129.16 Sep 4 17:40:39.444390 ignition[877]: Ignition 2.19.0 Sep 4 17:40:39.444402 ignition[877]: Stage: fetch-offline Sep 4 17:40:39.444450 ignition[877]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:40:39.444462 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:40:39.444584 ignition[877]: parsed url from cmdline: "" Sep 4 17:40:39.444589 ignition[877]: no config URL provided Sep 4 17:40:39.444596 ignition[877]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:40:39.444607 ignition[877]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:40:39.444613 ignition[877]: failed to fetch config: resource requires networking Sep 4 17:40:39.446345 ignition[877]: Ignition finished successfully Sep 4 17:40:39.463562 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:40:39.472445 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 4 17:40:39.488668 ignition[912]: Ignition 2.19.0 Sep 4 17:40:39.488678 ignition[912]: Stage: fetch Sep 4 17:40:39.488893 ignition[912]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:40:39.488906 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:40:39.489004 ignition[912]: parsed url from cmdline: "" Sep 4 17:40:39.489008 ignition[912]: no config URL provided Sep 4 17:40:39.489012 ignition[912]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:40:39.489019 ignition[912]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:40:39.489038 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 4 17:40:39.567702 ignition[912]: GET result: OK Sep 4 17:40:39.567909 ignition[912]: config has been read from IMDS userdata Sep 4 17:40:39.568021 ignition[912]: parsing config with SHA512: c4359dd1f05fe3fa8bbc0d3cc6d6ab95c26b4d04667fbdf40041591ecaa7dfd75ce30c017acb26d85567721f82875f56308af38788bcbc65c0fb827340f2c2a5 Sep 4 17:40:39.573416 unknown[912]: fetched base config from "system" Sep 4 17:40:39.573433 unknown[912]: fetched base config from "system" Sep 4 17:40:39.573443 unknown[912]: fetched user config from "azure" Sep 4 17:40:39.576354 ignition[912]: fetch: fetch complete Sep 4 17:40:39.579786 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 4 17:40:39.576363 ignition[912]: fetch: fetch passed Sep 4 17:40:39.576427 ignition[912]: Ignition finished successfully Sep 4 17:40:39.593428 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 17:40:39.609505 ignition[918]: Ignition 2.19.0 Sep 4 17:40:39.609515 ignition[918]: Stage: kargs Sep 4 17:40:39.609735 ignition[918]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:40:39.612806 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 17:40:39.609747 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:40:39.610617 ignition[918]: kargs: kargs passed Sep 4 17:40:39.610659 ignition[918]: Ignition finished successfully Sep 4 17:40:39.629423 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 17:40:39.643626 ignition[924]: Ignition 2.19.0 Sep 4 17:40:39.643636 ignition[924]: Stage: disks Sep 4 17:40:39.643853 ignition[924]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:40:39.643862 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:40:39.644710 ignition[924]: disks: disks passed Sep 4 17:40:39.644747 ignition[924]: Ignition finished successfully Sep 4 17:40:39.654092 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 17:40:39.656446 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 17:40:39.660808 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:40:39.663433 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:40:39.672403 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:40:39.676677 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:40:39.684486 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 17:40:39.741200 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Sep 4 17:40:39.745147 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 17:40:39.756385 systemd[1]: Mounting sysroot.mount - /sysroot... 
Sep 4 17:40:39.850276 kernel: EXT4-fs (sda9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none. Sep 4 17:40:39.851083 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 17:40:39.853619 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 17:40:39.893446 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:40:39.901365 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 17:40:39.912284 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (943) Sep 4 17:40:39.916273 kernel: BTRFS info (device sda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:40:39.916300 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:40:39.917852 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 4 17:40:39.923243 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:40:39.926269 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:40:39.927390 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 17:40:39.927421 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:40:39.936418 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:40:39.940106 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:40:39.953412 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:40:40.498456 systemd-networkd[902]: eth0: Gained IPv6LL Sep 4 17:40:40.498815 systemd-networkd[902]: enP34365s1: Gained IPv6LL Sep 4 17:40:40.568303 coreos-metadata[945]: Sep 04 17:40:40.568 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 4 17:40:40.573405 coreos-metadata[945]: Sep 04 17:40:40.573 INFO Fetch successful Sep 4 17:40:40.575802 coreos-metadata[945]: Sep 04 17:40:40.573 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 4 17:40:40.581162 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:40:40.585026 coreos-metadata[945]: Sep 04 17:40:40.585 INFO Fetch successful Sep 4 17:40:40.587083 coreos-metadata[945]: Sep 04 17:40:40.585 INFO wrote hostname ci-4054.1.0-a-6fd622a1a5 to /sysroot/etc/hostname Sep 4 17:40:40.586756 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 17:40:40.632385 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:40:40.638716 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:40:40.643541 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:40:41.772796 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 17:40:41.781399 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:40:41.788417 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:40:41.797421 kernel: BTRFS info (device sda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:40:41.796320 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:40:41.821346 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 4 17:40:41.828801 ignition[1061]: INFO : Ignition 2.19.0 Sep 4 17:40:41.828801 ignition[1061]: INFO : Stage: mount Sep 4 17:40:41.834926 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:40:41.834926 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:40:41.834926 ignition[1061]: INFO : mount: mount passed Sep 4 17:40:41.834926 ignition[1061]: INFO : Ignition finished successfully Sep 4 17:40:41.830737 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:40:41.842357 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:40:41.853109 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:40:41.867276 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1073) Sep 4 17:40:41.867309 kernel: BTRFS info (device sda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:40:41.870271 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:40:41.874115 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:40:41.879274 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:40:41.880093 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:40:41.901577 ignition[1089]: INFO : Ignition 2.19.0 Sep 4 17:40:41.901577 ignition[1089]: INFO : Stage: files Sep 4 17:40:41.905377 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:40:41.905377 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:40:41.905377 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:40:41.905377 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:40:41.905377 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:40:41.975008 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:40:41.979019 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:40:41.979019 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:40:41.975501 unknown[1089]: wrote ssh authorized keys file for user: core Sep 4 17:40:42.053041 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 4 17:40:42.057625 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 4 17:40:42.057625 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:40:42.057625 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 17:40:42.282801 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 4 17:40:42.330061 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:40:42.330061 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:40:42.338486 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/home/core/install.sh" Sep 4 17:40:42.342205 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:40:42.345916 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:40:42.349814 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:40:42.349814 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:40:42.349814 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:40:42.349814 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:40:42.349814 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:40:42.349814 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:40:42.349814 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:40:42.349814 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:40:42.349814 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:40:42.349814 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Sep 4 17:40:42.889018 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 4 17:40:43.236669 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:40:43.236669 ignition[1089]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 4 17:40:43.252381 ignition[1089]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 4 17:40:43.257431 ignition[1089]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 4 17:40:43.257431 ignition[1089]: INFO : files: op(c): [finished] processing unit "containerd.service" Sep 4 17:40:43.257431 ignition[1089]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Sep 4 17:40:43.257431 ignition[1089]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:40:43.257431 ignition[1089]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:40:43.257431 ignition[1089]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" 
Sep 4 17:40:43.257431 ignition[1089]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:40:43.257431 ignition[1089]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:40:43.257431 ignition[1089]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:40:43.257431 ignition[1089]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:40:43.257431 ignition[1089]: INFO : files: files passed Sep 4 17:40:43.257431 ignition[1089]: INFO : Ignition finished successfully Sep 4 17:40:43.254419 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:40:43.305560 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:40:43.311213 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:40:43.317652 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:40:43.319892 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 17:40:43.354753 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:40:43.354753 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:40:43.362419 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:40:43.367576 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:40:43.370780 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:40:43.381426 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:40:43.402502 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:40:43.402605 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 17:40:43.407790 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 17:40:43.412436 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:40:43.414598 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:40:43.423664 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:40:43.437095 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:40:43.448494 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:40:43.459835 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:40:43.462387 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:40:43.469704 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:40:43.471780 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:40:43.471884 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:40:43.480973 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:40:43.485553 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:40:43.489586 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Sep 4 17:40:43.492287 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:40:43.496970 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:40:43.501983 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:40:43.506579 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:40:43.511428 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:40:43.518604 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:40:43.523106 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:40:43.523618 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:40:43.523746 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:40:43.531139 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:40:43.536368 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:40:43.541143 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:40:43.543497 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:40:43.546466 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:40:43.551568 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:40:43.558154 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:40:43.558359 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:40:43.563403 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:40:43.563507 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:40:43.572055 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 4 17:40:43.572223 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 17:40:43.587573 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:40:43.589605 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:40:43.591355 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:40:43.595699 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:40:43.601527 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:40:43.601869 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:40:43.608752 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:40:43.608913 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:40:43.624480 ignition[1142]: INFO : Ignition 2.19.0 Sep 4 17:40:43.624480 ignition[1142]: INFO : Stage: umount Sep 4 17:40:43.624480 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:40:43.624480 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:40:43.614424 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:40:43.641621 ignition[1142]: INFO : umount: umount passed Sep 4 17:40:43.641621 ignition[1142]: INFO : Ignition finished successfully Sep 4 17:40:43.614511 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:40:43.626988 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:40:43.627099 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Sep 4 17:40:43.633539 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:40:43.633640 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:40:43.637374 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:40:43.637431 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:40:43.641619 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 17:40:43.641671 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 17:40:43.666934 systemd[1]: Stopped target network.target - Network. Sep 4 17:40:43.667028 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:40:43.667089 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:40:43.667766 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:40:43.668082 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:40:43.673072 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:40:43.677542 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:40:43.679520 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:40:43.696153 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:40:43.696216 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:40:43.700049 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:40:43.700099 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:40:43.704508 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:40:43.704564 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:40:43.705384 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:40:43.705428 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:40:43.706348 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:40:43.706588 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:40:43.708125 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:40:43.720111 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:40:43.720223 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:40:43.720510 systemd-networkd[902]: eth0: DHCPv6 lease lost Sep 4 17:40:43.724769 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:40:43.724892 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:40:43.731214 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:40:43.732994 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:40:43.759473 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:40:43.763717 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:40:43.763811 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:40:43.768809 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:40:43.768859 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:40:43.777540 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:40:43.777603 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:40:43.782054 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Sep 4 17:40:43.782110 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:40:43.787081 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:40:43.807872 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:40:43.808042 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:40:43.812789 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:40:43.812838 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:40:43.815452 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:40:43.815487 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:40:43.815762 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:40:43.815801 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:40:43.816507 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:40:43.816542 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:40:43.817167 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:40:43.817203 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:40:43.852467 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:40:43.857668 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:40:43.865512 kernel: hv_netvsc 6045bdd2-c502-6045-bdd2-c5026045bdd2 eth0: Data path switched from VF: enP34365s1 Sep 4 17:40:43.857729 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:40:43.863019 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:40:43.863072 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:40:43.872247 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:40:43.872346 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:40:43.892960 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:40:43.893081 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:40:44.174698 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:40:44.174856 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:40:44.179470 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:40:44.185575 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:40:44.185663 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:40:44.197486 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:40:44.308078 systemd[1]: Switching root. Sep 4 17:40:44.339004 systemd-journald[176]: Journal stopped Sep 4 17:40:49.015086 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). 
Sep 4 17:40:49.015134 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:40:49.015152 kernel: SELinux: policy capability open_perms=1 Sep 4 17:40:49.015166 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:40:49.015179 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:40:49.015193 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:40:49.015208 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:40:49.015226 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:40:49.015240 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:40:49.019241 kernel: audit: type=1403 audit(1725471646.432:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:40:49.019335 systemd[1]: Successfully loaded SELinux policy in 126.515ms. Sep 4 17:40:49.019356 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.777ms. Sep 4 17:40:49.019374 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:40:49.019390 systemd[1]: Detected virtualization microsoft. Sep 4 17:40:49.019412 systemd[1]: Detected architecture x86-64. Sep 4 17:40:49.019428 systemd[1]: Detected first boot. Sep 4 17:40:49.019445 systemd[1]: Hostname set to <ci-4054.1.0-a-6fd622a1a5>. Sep 4 17:40:49.019461 systemd[1]: Initializing machine ID from random generator. Sep 4 17:40:49.019477 zram_generator::config[1200]: No configuration found. Sep 4 17:40:49.019497 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:40:49.019513 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:40:49.019529 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 4 17:40:49.019550 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 17:40:49.019567 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:40:49.019583 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:40:49.019600 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 17:40:49.019620 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:40:49.019636 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:40:49.019653 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:40:49.019670 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 17:40:49.019686 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:40:49.019703 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:40:49.019720 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 17:40:49.019739 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 17:40:49.019756 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 17:40:49.019773 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:40:49.019789 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 17:40:49.019805 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:40:49.019822 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:40:49.019838 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:40:49.019860 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:40:49.019877 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:40:49.019897 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:40:49.019914 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:40:49.019931 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:40:49.019948 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:40:49.019965 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:40:49.019982 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:40:49.019999 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:40:49.020019 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:40:49.020036 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 17:40:49.020054 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:40:49.020071 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:40:49.020088 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:40:49.020108 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:40:49.020126 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 17:40:49.020142 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:40:49.020159 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 17:40:49.020177 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:40:49.020194 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:40:49.020214 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:40:49.020233 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:40:49.021364 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:40:49.021390 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:40:49.021405 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:40:49.021419 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 17:40:49.021433 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:40:49.021444 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 17:40:49.021454 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 4 17:40:49.021465 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Sep 4 17:40:49.021482 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:40:49.021495 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:40:49.021506 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 17:40:49.021519 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 17:40:49.021531 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:40:49.021543 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:40:49.021556 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 17:40:49.021567 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:40:49.021582 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:40:49.021595 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:40:49.021607 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 17:40:49.021619 kernel: loop: module loaded Sep 4 17:40:49.021631 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 17:40:49.021642 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:40:49.021655 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 17:40:49.021690 systemd-journald[1295]: Collecting audit messages is disabled. Sep 4 17:40:49.021719 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 17:40:49.021732 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:40:49.021744 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:40:49.021756 kernel: ACPI: bus type drm_connector registered Sep 4 17:40:49.021768 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:40:49.021782 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:40:49.021794 kernel: fuse: init (API version 7.39) Sep 4 17:40:49.021809 systemd-journald[1295]: Journal started Sep 4 17:40:49.021833 systemd-journald[1295]: Runtime Journal (/run/log/journal/c66bd07b054f46e2907d1a8617c4f7c4) is 8.0M, max 158.8M, 150.8M free. Sep 4 17:40:49.056276 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:40:49.032867 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:40:49.033099 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:40:49.036524 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 17:40:49.036727 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 17:40:49.039719 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:40:49.039989 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:40:49.043147 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:40:49.046991 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 17:40:49.050539 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 17:40:49.079567 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 17:40:49.092357 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Sep 4 17:40:49.105404 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 17:40:49.109435 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 17:40:49.120425 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 17:40:49.130420 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 17:40:49.133654 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:40:49.134620 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 17:40:49.136979 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:40:49.145446 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:40:49.151411 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:40:49.158660 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 17:40:49.161446 systemd-journald[1295]: Time spent on flushing to /var/log/journal/c66bd07b054f46e2907d1a8617c4f7c4 is 63.690ms for 944 entries. Sep 4 17:40:49.161446 systemd-journald[1295]: System Journal (/var/log/journal/c66bd07b054f46e2907d1a8617c4f7c4) is 11.9M, max 2.6G, 2.6G free. Sep 4 17:40:49.286819 systemd-journald[1295]: Received client request to flush runtime journal. Sep 4 17:40:49.286886 systemd-journald[1295]: /var/log/journal/c66bd07b054f46e2907d1a8617c4f7c4/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Sep 4 17:40:49.286929 systemd-journald[1295]: Rotating system journal. Sep 4 17:40:49.170728 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:40:49.173682 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 17:40:49.176403 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 17:40:49.183706 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 17:40:49.188651 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 17:40:49.204406 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 17:40:49.228442 udevadm[1370]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 4 17:40:49.260583 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:40:49.289836 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 17:40:49.318947 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Sep 4 17:40:49.318972 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Sep 4 17:40:49.325880 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:40:49.338449 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 17:40:49.419717 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 17:40:49.431470 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 4 17:40:49.446482 systemd-tmpfiles[1385]: ACLs are not supported, ignoring. Sep 4 17:40:49.446505 systemd-tmpfiles[1385]: ACLs are not supported, ignoring. Sep 4 17:40:49.452925 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:40:50.330793 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 17:40:50.339443 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:40:50.364141 systemd-udevd[1391]: Using default interface naming scheme 'v255'. Sep 4 17:40:50.545631 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:40:50.556529 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:40:50.618302 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1408) Sep 4 17:40:50.629351 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Sep 4 17:40:50.637353 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1408) Sep 4 17:40:50.651433 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 17:40:50.733974 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:40:50.747358 kernel: hv_vmbus: registering driver hv_balloon Sep 4 17:40:50.749682 kernel: hv_vmbus: registering driver hyperv_fb Sep 4 17:40:50.749720 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 4 17:40:50.749746 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 4 17:40:50.753473 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 4 17:40:50.765766 kernel: Console: switching to colour dummy device 80x25 Sep 4 17:40:50.767283 kernel: Console: switching to colour frame buffer device 128x48 Sep 4 17:40:50.771283 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 17:40:50.929462 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:40:50.939943 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1398) Sep 4 17:40:50.974778 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:40:50.976690 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:40:50.992413 systemd-networkd[1395]: lo: Link UP Sep 4 17:40:50.992421 systemd-networkd[1395]: lo: Gained carrier Sep 4 17:40:50.992529 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:40:50.994732 systemd-networkd[1395]: Enumeration completed Sep 4 17:40:50.995133 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:40:50.995139 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:40:51.003987 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:40:51.030082 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Sep 4 17:40:51.053276 kernel: mlx5_core 863d:00:02.0 enP34365s1: Link up Sep 4 17:40:51.072281 kernel: hv_netvsc 6045bdd2-c502-6045-bdd2-c5026045bdd2 eth0: Data path switched to VF: enP34365s1 Sep 4 17:40:51.074232 systemd-networkd[1395]: enP34365s1: Link UP Sep 4 17:40:51.074527 systemd-networkd[1395]: eth0: Link UP Sep 4 17:40:51.074672 systemd-networkd[1395]: eth0: Gained carrier Sep 4 17:40:51.074764 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:40:51.114608 systemd-networkd[1395]: enP34365s1: Gained carrier Sep 4 17:40:51.153342 systemd-networkd[1395]: eth0: DHCPv4 address 10.200.4.29/24, gateway 10.200.4.1 acquired from 168.63.129.16 Sep 4 17:40:51.195344 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 4 17:40:51.216384 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:40:51.216773 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:40:51.228864 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Sep 4 17:40:51.233708 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:40:51.344666 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:40:51.353462 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:40:51.429464 lvm[1485]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:40:51.458103 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:40:51.461401 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:40:51.468418 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:40:51.473223 lvm[1488]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:40:51.496030 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:40:51.499525 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:40:51.499625 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:40:51.499652 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:40:51.500021 systemd[1]: Reached target machines.target - Containers. Sep 4 17:40:51.501934 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 17:40:51.514476 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 17:40:51.520419 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 17:40:51.522853 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:40:51.528441 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:40:51.532469 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 4 17:40:51.542437 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 17:40:51.546750 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 4 17:40:51.549768 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 17:40:51.566328 kernel: loop0: detected capacity change from 0 to 209816 Sep 4 17:40:51.587492 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:40:51.610444 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 17:40:51.611356 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 4 17:40:51.642283 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 17:40:51.683290 kernel: loop1: detected capacity change from 0 to 61888 Sep 4 17:40:52.160282 kernel: loop2: detected capacity change from 0 to 140728 Sep 4 17:40:52.680287 kernel: loop3: detected capacity change from 0 to 89336 Sep 4 17:40:52.785440 systemd-networkd[1395]: enP34365s1: Gained IPv6LL Sep 4 17:40:53.002298 kernel: loop4: detected capacity change from 0 to 209816 Sep 4 17:40:53.009280 kernel: loop5: detected capacity change from 0 to 61888 Sep 4 17:40:53.014279 kernel: loop6: detected capacity change from 0 to 140728 Sep 4 17:40:53.024275 kernel: loop7: detected capacity change from 0 to 89336 Sep 4 17:40:53.028998 (sd-merge)[1513]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 4 17:40:53.029545 (sd-merge)[1513]: Merged extensions into '/usr'. Sep 4 17:40:53.032959 systemd[1]: Reloading requested from client PID 1498 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 17:40:53.032974 systemd[1]: Reloading... Sep 4 17:40:53.042346 systemd-networkd[1395]: eth0: Gained IPv6LL Sep 4 17:40:53.088382 zram_generator::config[1540]: No configuration found. Sep 4 17:40:53.243091 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:40:53.315203 systemd[1]: Reloading finished in 281 ms. Sep 4 17:40:53.332937 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:40:53.336916 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 17:40:53.349571 systemd[1]: Starting ensure-sysext.service... Sep 4 17:40:53.353833 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:40:53.364991 systemd[1]: Reloading requested from client PID 1605 ('systemctl') (unit ensure-sysext.service)... Sep 4 17:40:53.365007 systemd[1]: Reloading... Sep 4 17:40:53.389973 systemd-tmpfiles[1606]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 17:40:53.390813 systemd-tmpfiles[1606]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 17:40:53.394237 systemd-tmpfiles[1606]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 17:40:53.394662 systemd-tmpfiles[1606]: ACLs are not supported, ignoring. Sep 4 17:40:53.394740 systemd-tmpfiles[1606]: ACLs are not supported, ignoring. Sep 4 17:40:53.403897 systemd-tmpfiles[1606]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:40:53.403912 systemd-tmpfiles[1606]: Skipping /boot Sep 4 17:40:53.423499 systemd-tmpfiles[1606]: Detected autofs mount point /boot during canonicalization of boot. 
Sep 4 17:40:53.423512 systemd-tmpfiles[1606]: Skipping /boot Sep 4 17:40:53.460294 zram_generator::config[1629]: No configuration found. Sep 4 17:40:53.587185 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:40:53.660460 systemd[1]: Reloading finished in 294 ms. Sep 4 17:40:53.685870 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:40:53.694844 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:40:53.700509 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 17:40:53.706210 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 17:40:53.716535 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:40:53.730478 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 17:40:53.735953 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:40:53.736609 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:40:53.741416 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:40:53.749321 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:40:53.761509 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:40:53.770398 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:40:53.770573 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:40:53.774043 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:40:53.776279 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:40:53.783201 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:40:53.783418 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:40:53.791250 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:40:53.791628 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:40:53.803035 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:40:53.804244 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:40:53.813657 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:40:53.823667 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:40:53.839600 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:40:53.846785 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:40:53.847027 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 4 17:40:53.851221 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:40:53.852055 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:40:53.856146 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:40:53.856398 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:40:53.859951 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:40:53.860251 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:40:53.866128 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:40:53.880454 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:40:53.885722 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:40:53.886215 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:40:53.892399 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:40:53.899890 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:40:53.904944 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:40:53.918395 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:40:53.924776 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:40:53.924865 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:40:53.929717 systemd-resolved[1703]: Positive Trust Anchors: Sep 4 17:40:53.930082 systemd-resolved[1703]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:40:53.930146 systemd-resolved[1703]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:40:53.930956 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:40:53.932611 systemd[1]: Finished ensure-sysext.service. Sep 4 17:40:53.935030 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:40:53.935235 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:40:53.938342 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:40:53.938526 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:40:53.941061 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:40:53.941228 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:40:53.944166 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:40:53.944346 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 4 17:40:53.952295 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:40:53.952408 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:40:53.953895 systemd-resolved[1703]: Using system hostname 'ci-4054.1.0-a-6fd622a1a5'. Sep 4 17:40:53.954333 augenrules[1753]: No rules Sep 4 17:40:53.955928 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:40:53.958880 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:40:53.961530 systemd[1]: Reached target network.target - Network. Sep 4 17:40:53.963439 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:40:53.965945 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:40:54.435820 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:40:54.439866 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 17:40:57.526397 ldconfig[1495]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 17:40:57.536549 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 17:40:57.547599 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:40:57.570745 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:40:57.574601 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:40:57.577628 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:40:57.580660 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:40:57.583624 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:40:57.586073 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:40:57.588748 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:40:57.591562 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:40:57.591607 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:40:57.593602 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:40:57.596439 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:40:57.600187 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:40:57.603609 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:40:57.607102 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:40:57.609611 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:40:57.611781 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:40:57.620630 systemd[1]: System is tainted: cgroupsv1 Sep 4 17:40:57.620696 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:40:57.620727 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Sep 4 17:40:57.643426 systemd[1]: Starting chronyd.service - NTP client/server... Sep 4 17:40:57.649360 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:40:57.654418 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 17:40:57.676522 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:40:57.682356 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:40:57.694478 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:40:57.696814 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:40:57.701450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:40:57.704242 (chronyd)[1775]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Sep 4 17:40:57.707413 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:40:57.712734 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:40:57.719819 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:40:57.729745 jq[1780]: false Sep 4 17:40:57.733407 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:40:57.743454 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:40:57.746248 chronyd[1797]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Sep 4 17:40:57.754706 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:40:57.761480 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:40:57.774405 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:40:57.788416 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:40:57.788870 dbus-daemon[1779]: [system] SELinux support is enabled Sep 4 17:40:57.797568 chronyd[1797]: Timezone right/UTC failed leap second check, ignoring Sep 4 17:40:57.797726 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:40:57.797780 chronyd[1797]: Loaded seccomp filter (level 2) Sep 4 17:40:57.807574 systemd[1]: Started chronyd.service - NTP client/server. Sep 4 17:40:57.818653 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:40:57.818974 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:40:57.827593 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:40:57.827913 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 4 17:40:57.834197 jq[1808]: true Sep 4 17:40:57.836135 extend-filesystems[1783]: Found loop4 Sep 4 17:40:57.844379 extend-filesystems[1783]: Found loop5 Sep 4 17:40:57.844379 extend-filesystems[1783]: Found loop6 Sep 4 17:40:57.844379 extend-filesystems[1783]: Found loop7 Sep 4 17:40:57.844379 extend-filesystems[1783]: Found sda Sep 4 17:40:57.844379 extend-filesystems[1783]: Found sda1 Sep 4 17:40:57.844379 extend-filesystems[1783]: Found sda2 Sep 4 17:40:57.844379 extend-filesystems[1783]: Found sda3 Sep 4 17:40:57.844379 extend-filesystems[1783]: Found usr Sep 4 17:40:57.844379 extend-filesystems[1783]: Found sda4 Sep 4 17:40:57.844379 extend-filesystems[1783]: Found sda6 Sep 4 17:40:57.844379 extend-filesystems[1783]: Found sda7 Sep 4 17:40:57.844379 extend-filesystems[1783]: Found sda9 Sep 4 17:40:57.844379 extend-filesystems[1783]: Checking size of /dev/sda9 Sep 4 17:40:57.912963 update_engine[1804]: I0904 17:40:57.870364 1804 main.cc:92] Flatcar Update Engine starting Sep 4 17:40:57.912963 update_engine[1804]: I0904 17:40:57.912505 1804 update_check_scheduler.cc:74] Next update check in 9m26s Sep 4 17:40:57.851748 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:40:57.864502 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:40:57.864783 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:40:57.900144 systemd-logind[1798]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 17:40:57.904418 systemd-logind[1798]: New seat seat0. Sep 4 17:40:57.905545 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:40:57.924148 (ntainerd)[1826]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:40:57.937294 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:40:57.941983 dbus-daemon[1779]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 4 17:40:57.937345 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:40:57.943413 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:40:57.954820 jq[1823]: true Sep 4 17:40:57.943437 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:40:57.964309 extend-filesystems[1783]: Old size kept for /dev/sda9 Sep 4 17:40:57.964309 extend-filesystems[1783]: Found sr0 Sep 4 17:40:57.965775 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:40:57.966008 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:40:57.983604 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:40:57.988104 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:40:57.992636 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 4 17:40:57.997194 tar[1821]: linux-amd64/helm Sep 4 17:40:58.014239 coreos-metadata[1777]: Sep 04 17:40:58.013 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 4 17:40:58.017394 coreos-metadata[1777]: Sep 04 17:40:58.017 INFO Fetch successful Sep 4 17:40:58.017394 coreos-metadata[1777]: Sep 04 17:40:58.017 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 4 17:40:58.024732 coreos-metadata[1777]: Sep 04 17:40:58.023 INFO Fetch successful Sep 4 17:40:58.024732 coreos-metadata[1777]: Sep 04 17:40:58.023 INFO Fetching http://168.63.129.16/machine/a8c08102-674a-4ae1-89ea-87b093724855/b5a6bc1f%2Db006%2D4db6%2D8e49%2D55d411013212.%5Fci%2D4054.1.0%2Da%2D6fd622a1a5?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 4 17:40:58.026334 coreos-metadata[1777]: Sep 04 17:40:58.025 INFO Fetch successful Sep 4 17:40:58.026334 coreos-metadata[1777]: Sep 04 17:40:58.025 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 4 17:40:58.037281 coreos-metadata[1777]: Sep 04 17:40:58.036 INFO Fetch successful Sep 4 17:40:58.091463 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 17:40:58.096350 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:40:58.149081 bash[1872]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:40:58.150972 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:40:58.158056 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 17:40:58.215088 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1849) Sep 4 17:40:58.562875 locksmithd[1854]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:40:59.235882 sshd_keygen[1814]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:40:59.257550 tar[1821]: linux-amd64/LICENSE Sep 4 17:40:59.257956 tar[1821]: linux-amd64/README.md Sep 4 17:40:59.273916 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:40:59.301438 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:40:59.315183 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:40:59.325319 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 4 17:40:59.350632 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:40:59.350958 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:40:59.359718 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:40:59.366399 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 4 17:40:59.393848 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:40:59.405763 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:40:59.416632 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 17:40:59.419465 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:40:59.428407 containerd[1826]: time="2024-09-04T17:40:59.428330200Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20 Sep 4 17:40:59.463318 containerd[1826]: time="2024-09-04T17:40:59.463273200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Sep 4 17:40:59.464945 containerd[1826]: time="2024-09-04T17:40:59.464911800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:40:59.464945 containerd[1826]: time="2024-09-04T17:40:59.464940700Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:40:59.465046 containerd[1826]: time="2024-09-04T17:40:59.464975000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:40:59.466218 containerd[1826]: time="2024-09-04T17:40:59.465134200Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:40:59.466218 containerd[1826]: time="2024-09-04T17:40:59.465160800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:40:59.466218 containerd[1826]: time="2024-09-04T17:40:59.465228800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:40:59.466218 containerd[1826]: time="2024-09-04T17:40:59.465245400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:40:59.466218 containerd[1826]: time="2024-09-04T17:40:59.465745500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:40:59.466218 containerd[1826]: time="2024-09-04T17:40:59.465779100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:40:59.466218 containerd[1826]: time="2024-09-04T17:40:59.465807600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:40:59.466218 containerd[1826]: time="2024-09-04T17:40:59.465823100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:40:59.466218 containerd[1826]: time="2024-09-04T17:40:59.465927700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:40:59.466218 containerd[1826]: time="2024-09-04T17:40:59.466153400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:40:59.466608 containerd[1826]: time="2024-09-04T17:40:59.466371800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:40:59.466608 containerd[1826]: time="2024-09-04T17:40:59.466402000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:40:59.466608 containerd[1826]: time="2024-09-04T17:40:59.466499600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 4 17:40:59.466608 containerd[1826]: time="2024-09-04T17:40:59.466580900Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:40:59.480785 containerd[1826]: time="2024-09-04T17:40:59.479334300Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:40:59.480785 containerd[1826]: time="2024-09-04T17:40:59.479431900Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:40:59.480785 containerd[1826]: time="2024-09-04T17:40:59.479457100Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:40:59.480785 containerd[1826]: time="2024-09-04T17:40:59.479478100Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:40:59.480785 containerd[1826]: time="2024-09-04T17:40:59.479496500Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:40:59.480785 containerd[1826]: time="2024-09-04T17:40:59.479655100Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:40:59.480785 containerd[1826]: time="2024-09-04T17:40:59.480079000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:40:59.480785 containerd[1826]: time="2024-09-04T17:40:59.480194100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:40:59.480785 containerd[1826]: time="2024-09-04T17:40:59.480213100Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:40:59.480785 containerd[1826]: time="2024-09-04T17:40:59.480230900Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:40:59.480785 containerd[1826]: time="2024-09-04T17:40:59.480249200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:40:59.480785 containerd[1826]: time="2024-09-04T17:40:59.480288300Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:40:59.480785 containerd[1826]: time="2024-09-04T17:40:59.480306800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:40:59.480785 containerd[1826]: time="2024-09-04T17:40:59.480326900Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:40:59.481322 containerd[1826]: time="2024-09-04T17:40:59.480346800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:40:59.481322 containerd[1826]: time="2024-09-04T17:40:59.480364400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:40:59.481322 containerd[1826]: time="2024-09-04T17:40:59.480404100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:40:59.481322 containerd[1826]: time="2024-09-04T17:40:59.480421500Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 4 17:40:59.481322 containerd[1826]: time="2024-09-04T17:40:59.480448100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:40:59.481322 containerd[1826]: time="2024-09-04T17:40:59.480469700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:40:59.481322 containerd[1826]: time="2024-09-04T17:40:59.480486700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:40:59.481322 containerd[1826]: time="2024-09-04T17:40:59.480504500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:40:59.481322 containerd[1826]: time="2024-09-04T17:40:59.480521100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:40:59.481322 containerd[1826]: time="2024-09-04T17:40:59.480539000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:40:59.481322 containerd[1826]: time="2024-09-04T17:40:59.480555300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:40:59.481322 containerd[1826]: time="2024-09-04T17:40:59.480573600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:40:59.481322 containerd[1826]: time="2024-09-04T17:40:59.480591600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:40:59.481322 containerd[1826]: time="2024-09-04T17:40:59.480618100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:40:59.481824 containerd[1826]: time="2024-09-04T17:40:59.480637000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:40:59.481824 containerd[1826]: time="2024-09-04T17:40:59.480659700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:40:59.481824 containerd[1826]: time="2024-09-04T17:40:59.480678700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:40:59.481824 containerd[1826]: time="2024-09-04T17:40:59.480699700Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:40:59.481824 containerd[1826]: time="2024-09-04T17:40:59.480727400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:40:59.481824 containerd[1826]: time="2024-09-04T17:40:59.480743800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:40:59.481824 containerd[1826]: time="2024-09-04T17:40:59.480758700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:40:59.481824 containerd[1826]: time="2024-09-04T17:40:59.480816900Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:40:59.481824 containerd[1826]: time="2024-09-04T17:40:59.480838300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:40:59.481824 containerd[1826]: time="2024-09-04T17:40:59.480851800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:40:59.481824 containerd[1826]: time="2024-09-04T17:40:59.480869600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:40:59.481824 containerd[1826]: time="2024-09-04T17:40:59.480882300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:40:59.481824 containerd[1826]: time="2024-09-04T17:40:59.480907200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:40:59.481824 containerd[1826]: time="2024-09-04T17:40:59.480923200Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:40:59.482318 containerd[1826]: time="2024-09-04T17:40:59.480938500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 17:40:59.482369 containerd[1826]: time="2024-09-04T17:40:59.481398200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:40:59.482369 containerd[1826]: time="2024-09-04T17:40:59.481477400Z" level=info msg="Connect containerd service" Sep 4 17:40:59.482369 containerd[1826]: time="2024-09-04T17:40:59.481530700Z" level=info msg="using legacy CRI server" Sep 4 17:40:59.482369 containerd[1826]: time="2024-09-04T17:40:59.481540300Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:40:59.482369 containerd[1826]: time="2024-09-04T17:40:59.481676700Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:40:59.482369 containerd[1826]: time="2024-09-04T17:40:59.482307100Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:40:59.482712 containerd[1826]: time="2024-09-04T17:40:59.482634400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:40:59.482712 containerd[1826]: time="2024-09-04T17:40:59.482686400Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:40:59.482812 containerd[1826]: time="2024-09-04T17:40:59.482770200Z" level=info msg="Start subscribing containerd event" Sep 4 17:40:59.482849 containerd[1826]: time="2024-09-04T17:40:59.482818000Z" level=info msg="Start recovering state" Sep 4 17:40:59.482953 containerd[1826]: time="2024-09-04T17:40:59.482885100Z" level=info msg="Start event monitor" Sep 4 17:40:59.482953 containerd[1826]: time="2024-09-04T17:40:59.482910500Z" level=info msg="Start snapshots syncer" Sep 4 17:40:59.482953 containerd[1826]: time="2024-09-04T17:40:59.482922400Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:40:59.482953 containerd[1826]: time="2024-09-04T17:40:59.482930800Z" level=info msg="Start streaming server" Sep 4 17:40:59.487224 containerd[1826]: time="2024-09-04T17:40:59.483365800Z" level=info msg="containerd successfully booted in 0.056767s" Sep 4 17:40:59.483128 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:40:59.615429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:40:59.619395 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:40:59.620588 (kubelet)[1962]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:40:59.622669 systemd[1]: Startup finished in 769ms (firmware) + 30.230s (loader) + 12.667s (kernel) + 13.315s (userspace) = 56.983s. Sep 4 17:41:00.092506 login[1948]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 4 17:41:00.093041 login[1951]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 4 17:41:00.106233 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:41:00.114527 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:41:00.122113 systemd-logind[1798]: New session 2 of user core. Sep 4 17:41:00.130178 systemd-logind[1798]: New session 1 of user core. 
Sep 4 17:41:00.139124 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:41:00.150609 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 17:41:00.185736 (systemd)[1976]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:41:00.359901 systemd[1976]: Queued start job for default target default.target. Sep 4 17:41:00.360704 systemd[1976]: Created slice app.slice - User Application Slice. Sep 4 17:41:00.360735 systemd[1976]: Reached target paths.target - Paths. Sep 4 17:41:00.360753 systemd[1976]: Reached target timers.target - Timers. Sep 4 17:41:00.370367 systemd[1976]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:41:00.380819 systemd[1976]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:41:00.380893 systemd[1976]: Reached target sockets.target - Sockets. Sep 4 17:41:00.380912 systemd[1976]: Reached target basic.target - Basic System. Sep 4 17:41:00.380953 systemd[1976]: Reached target default.target - Main User Target. Sep 4 17:41:00.380983 systemd[1976]: Startup finished in 185ms. Sep 4 17:41:00.381680 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:41:00.391163 kubelet[1962]: E0904 17:41:00.391068 1962 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:41:00.394583 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 17:41:00.396659 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:41:00.397131 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:41:00.398023 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 4 17:41:01.276537 waagent[1943]: 2024-09-04T17:41:01.276437Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Sep 4 17:41:01.305809 waagent[1943]: 2024-09-04T17:41:01.276873Z INFO Daemon Daemon OS: flatcar 4054.1.0 Sep 4 17:41:01.305809 waagent[1943]: 2024-09-04T17:41:01.277631Z INFO Daemon Daemon Python: 3.11.9 Sep 4 17:41:01.305809 waagent[1943]: 2024-09-04T17:41:01.278543Z INFO Daemon Daemon Run daemon Sep 4 17:41:01.305809 waagent[1943]: 2024-09-04T17:41:01.279191Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4054.1.0' Sep 4 17:41:01.305809 waagent[1943]: 2024-09-04T17:41:01.279836Z INFO Daemon Daemon Using waagent for provisioning Sep 4 17:41:01.305809 waagent[1943]: 2024-09-04T17:41:01.281086Z INFO Daemon Daemon Activate resource disk Sep 4 17:41:01.305809 waagent[1943]: 2024-09-04T17:41:01.281685Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 4 17:41:01.305809 waagent[1943]: 2024-09-04T17:41:01.286032Z INFO Daemon Daemon Found device: None Sep 4 17:41:01.305809 waagent[1943]: 2024-09-04T17:41:01.286954Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 4 17:41:01.305809 waagent[1943]: 2024-09-04T17:41:01.287664Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 4 17:41:01.305809 waagent[1943]: 2024-09-04T17:41:01.290152Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 4 17:41:01.305809 waagent[1943]: 2024-09-04T17:41:01.290965Z INFO Daemon Daemon Running default provisioning handler Sep 4 17:41:01.308668 waagent[1943]: 2024-09-04T17:41:01.308611Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 4 17:41:01.314201 waagent[1943]: 2024-09-04T17:41:01.314153Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 4 17:41:01.321287 waagent[1943]: 2024-09-04T17:41:01.314346Z INFO Daemon Daemon cloud-init is enabled: False Sep 4 17:41:01.321287 waagent[1943]: 2024-09-04T17:41:01.315071Z INFO Daemon Daemon Copying ovf-env.xml Sep 4 17:41:01.414230 waagent[1943]: 2024-09-04T17:41:01.411914Z INFO Daemon Daemon Successfully mounted dvd Sep 4 17:41:01.426766 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 4 17:41:01.429014 waagent[1943]: 2024-09-04T17:41:01.428944Z INFO Daemon Daemon Detect protocol endpoint Sep 4 17:41:01.431204 waagent[1943]: 2024-09-04T17:41:01.431083Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 4 17:41:01.441423 waagent[1943]: 2024-09-04T17:41:01.431677Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Sep 4 17:41:01.441423 waagent[1943]: 2024-09-04T17:41:01.432352Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 4 17:41:01.441423 waagent[1943]: 2024-09-04T17:41:01.433283Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 4 17:41:01.441423 waagent[1943]: 2024-09-04T17:41:01.433881Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 4 17:41:01.478338 waagent[1943]: 2024-09-04T17:41:01.478249Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 4 17:41:01.485718 waagent[1943]: 2024-09-04T17:41:01.478823Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 4 17:41:01.485718 waagent[1943]: 2024-09-04T17:41:01.479462Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 4 17:41:01.569303 waagent[1943]: 2024-09-04T17:41:01.569125Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 4 17:41:01.572297 waagent[1943]: 2024-09-04T17:41:01.572198Z INFO Daemon Daemon Forcing an update of the goal state. Sep 4 17:41:01.577969 waagent[1943]: 2024-09-04T17:41:01.577911Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 4 17:41:01.594936 waagent[1943]: 2024-09-04T17:41:01.594882Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Sep 4 17:41:01.610289 waagent[1943]: 2024-09-04T17:41:01.595510Z INFO Daemon Sep 4 17:41:01.610289 waagent[1943]: 2024-09-04T17:41:01.596190Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 2d10727a-6183-439d-b59b-2cfd616a723a eTag: 17274629827384481860 source: Fabric] Sep 4 17:41:01.610289 waagent[1943]: 2024-09-04T17:41:01.597204Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 4 17:41:01.610289 waagent[1943]: 2024-09-04T17:41:01.598155Z INFO Daemon Sep 4 17:41:01.610289 waagent[1943]: 2024-09-04T17:41:01.598763Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 4 17:41:01.610289 waagent[1943]: 2024-09-04T17:41:01.603062Z INFO Daemon Daemon Downloading artifacts profile blob Sep 4 17:41:01.697876 waagent[1943]: 2024-09-04T17:41:01.697805Z INFO Daemon Downloaded certificate {'thumbprint': 'D1875B999C5E97A42C6BE50583B8465C03864203', 'hasPrivateKey': True} Sep 4 17:41:01.702918 waagent[1943]: 2024-09-04T17:41:01.702860Z INFO Daemon Downloaded certificate {'thumbprint': '23BCB97C1AA00467361313E4FB227B7665FF9910', 'hasPrivateKey': False} Sep 4 17:41:01.707818 waagent[1943]: 2024-09-04T17:41:01.707761Z INFO Daemon Fetch goal state completed Sep 4 17:41:01.716546 waagent[1943]: 2024-09-04T17:41:01.716501Z INFO Daemon Daemon Starting provisioning Sep 4 17:41:01.719906 waagent[1943]: 2024-09-04T17:41:01.718613Z INFO Daemon Daemon Handle ovf-env.xml. Sep 4 17:41:01.719906 waagent[1943]: 2024-09-04T17:41:01.718789Z INFO Daemon Daemon Set hostname [ci-4054.1.0-a-6fd622a1a5] Sep 4 17:41:01.741279 waagent[1943]: 2024-09-04T17:41:01.741190Z INFO Daemon Daemon Publish hostname [ci-4054.1.0-a-6fd622a1a5] Sep 4 17:41:01.748032 waagent[1943]: 2024-09-04T17:41:01.741663Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 4 17:41:01.748032 waagent[1943]: 2024-09-04T17:41:01.742528Z INFO Daemon Daemon Primary interface is [eth0] Sep 4 17:41:01.771107 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:41:01.771116 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 4 17:41:01.771162 systemd-networkd[1395]: eth0: DHCP lease lost Sep 4 17:41:01.772428 waagent[1943]: 2024-09-04T17:41:01.772363Z INFO Daemon Daemon Create user account if not exists Sep 4 17:41:01.787119 waagent[1943]: 2024-09-04T17:41:01.772736Z INFO Daemon Daemon User core already exists, skip useradd Sep 4 17:41:01.787119 waagent[1943]: 2024-09-04T17:41:01.774032Z INFO Daemon Daemon Configure sudoer Sep 4 17:41:01.787119 waagent[1943]: 2024-09-04T17:41:01.775072Z INFO Daemon Daemon Configure sshd Sep 4 17:41:01.787119 waagent[1943]: 2024-09-04T17:41:01.775843Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 4 17:41:01.787119 waagent[1943]: 2024-09-04T17:41:01.776568Z INFO Daemon Daemon Deploy ssh public key. Sep 4 17:41:01.788305 systemd-networkd[1395]: eth0: DHCPv6 lease lost Sep 4 17:41:01.814327 systemd-networkd[1395]: eth0: DHCPv4 address 10.200.4.29/24, gateway 10.200.4.1 acquired from 168.63.129.16 Sep 4 17:41:03.076519 waagent[1943]: 2024-09-04T17:41:03.076447Z INFO Daemon Daemon Provisioning complete Sep 4 17:41:03.087987 waagent[1943]: 2024-09-04T17:41:03.087923Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 4 17:41:03.094057 waagent[1943]: 2024-09-04T17:41:03.088228Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 4 17:41:03.094057 waagent[1943]: 2024-09-04T17:41:03.089086Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Sep 4 17:41:03.240034 waagent[2038]: 2024-09-04T17:41:03.239941Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Sep 4 17:41:03.240504 waagent[2038]: 2024-09-04T17:41:03.240095Z INFO ExtHandler ExtHandler OS: flatcar 4054.1.0 Sep 4 17:41:03.240504 waagent[2038]: 2024-09-04T17:41:03.240176Z INFO ExtHandler ExtHandler Python: 3.11.9 Sep 4 17:41:03.262967 waagent[2038]: 2024-09-04T17:41:03.262897Z INFO ExtHandler ExtHandler Distro: flatcar-4054.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 4 17:41:03.263148 waagent[2038]: 2024-09-04T17:41:03.263104Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 17:41:03.263236 waagent[2038]: 2024-09-04T17:41:03.263195Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 17:41:03.270034 waagent[2038]: 2024-09-04T17:41:03.269971Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 4 17:41:03.274529 waagent[2038]: 2024-09-04T17:41:03.274477Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Sep 4 17:41:03.274957 waagent[2038]: 2024-09-04T17:41:03.274903Z INFO ExtHandler Sep 4 17:41:03.275020 waagent[2038]: 2024-09-04T17:41:03.274990Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 64ccfa9e-f9a0-4315-9c8a-74b7dad7cfcd eTag: 17274629827384481860 source: Fabric] Sep 4 17:41:03.275345 waagent[2038]: 2024-09-04T17:41:03.275295Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Sep 4 17:41:03.275880 waagent[2038]: 2024-09-04T17:41:03.275825Z INFO ExtHandler Sep 4 17:41:03.275952 waagent[2038]: 2024-09-04T17:41:03.275906Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 4 17:41:03.278712 waagent[2038]: 2024-09-04T17:41:03.278671Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 4 17:41:03.351243 waagent[2038]: 2024-09-04T17:41:03.351123Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D1875B999C5E97A42C6BE50583B8465C03864203', 'hasPrivateKey': True} Sep 4 17:41:03.351613 waagent[2038]: 2024-09-04T17:41:03.351564Z INFO ExtHandler Downloaded certificate {'thumbprint': '23BCB97C1AA00467361313E4FB227B7665FF9910', 'hasPrivateKey': False} Sep 4 17:41:03.352011 waagent[2038]: 2024-09-04T17:41:03.351964Z INFO ExtHandler Fetch goal state completed Sep 4 17:41:03.365562 waagent[2038]: 2024-09-04T17:41:03.365508Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2038 Sep 4 17:41:03.365709 waagent[2038]: 2024-09-04T17:41:03.365664Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 4 17:41:03.367204 waagent[2038]: 2024-09-04T17:41:03.367149Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4054.1.0', '', 'Flatcar Container Linux by Kinvolk'] Sep 4 17:41:03.367580 waagent[2038]: 2024-09-04T17:41:03.367531Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 4 17:41:03.402246 waagent[2038]: 2024-09-04T17:41:03.402199Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 4 17:41:03.402481 waagent[2038]: 2024-09-04T17:41:03.402429Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 4 17:41:03.409198 waagent[2038]: 2024-09-04T17:41:03.409067Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 4 17:41:03.415596 systemd[1]: Reloading requested from client PID 2053 ('systemctl') (unit waagent.service)... Sep 4 17:41:03.415611 systemd[1]: Reloading... Sep 4 17:41:03.486280 zram_generator::config[2081]: No configuration found. Sep 4 17:41:03.615794 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:41:03.690302 systemd[1]: Reloading finished in 274 ms. Sep 4 17:41:03.717286 waagent[2038]: 2024-09-04T17:41:03.715593Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Sep 4 17:41:03.722744 systemd[1]: Reloading requested from client PID 2146 ('systemctl') (unit waagent.service)... Sep 4 17:41:03.722759 systemd[1]: Reloading... Sep 4 17:41:03.797301 zram_generator::config[2177]: No configuration found. Sep 4 17:41:03.922718 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:41:03.996136 systemd[1]: Reloading finished in 272 ms. 
Sep 4 17:41:04.020287 waagent[2038]: 2024-09-04T17:41:04.019586Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 4 17:41:04.020287 waagent[2038]: 2024-09-04T17:41:04.019782Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 4 17:41:04.348786 waagent[2038]: 2024-09-04T17:41:04.348645Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 4 17:41:04.349426 waagent[2038]: 2024-09-04T17:41:04.349366Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Sep 4 17:41:04.350158 waagent[2038]: 2024-09-04T17:41:04.350097Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 4 17:41:04.350746 waagent[2038]: 2024-09-04T17:41:04.350673Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 4 17:41:04.350831 waagent[2038]: 2024-09-04T17:41:04.350753Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 17:41:04.350890 waagent[2038]: 2024-09-04T17:41:04.350811Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 17:41:04.350980 waagent[2038]: 2024-09-04T17:41:04.350938Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 17:41:04.351168 waagent[2038]: 2024-09-04T17:41:04.351122Z INFO EnvHandler ExtHandler Configure routes Sep 4 17:41:04.351269 waagent[2038]: 2024-09-04T17:41:04.351221Z INFO EnvHandler ExtHandler Gateway:None Sep 4 17:41:04.351374 waagent[2038]: 2024-09-04T17:41:04.351331Z INFO EnvHandler ExtHandler Routes:None Sep 4 17:41:04.351761 waagent[2038]: 2024-09-04T17:41:04.351708Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 4 17:41:04.351931 waagent[2038]: 2024-09-04T17:41:04.351889Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 4 17:41:04.352211 waagent[2038]: 2024-09-04T17:41:04.352166Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 17:41:04.352913 waagent[2038]: 2024-09-04T17:41:04.352864Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 4 17:41:04.353080 waagent[2038]: 2024-09-04T17:41:04.353041Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 4 17:41:04.353466 waagent[2038]: 2024-09-04T17:41:04.353422Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
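The routing table that MonitorHandler dumps a little further down comes straight from /proc/net/route, where destination, gateway and netmask are little-endian hex IPv4 values. Assuming python3 is available on the host (the log does not show this being done), a single value can be decoded like so:

    python3 -c 'import socket,struct; print(socket.inet_ntoa(struct.pack("<L", int("0104C80A",16))))'
    # prints 10.200.4.1, the default gateway; likewise 10813FA8 decodes to 168.63.129.16 (the WireServer)
    # and FEA9FEA9 to 169.254.169.254 (the instance metadata endpoint)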
Sep 4 17:41:04.353852 waagent[2038]: 2024-09-04T17:41:04.353813Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 4 17:41:04.354554 waagent[2038]: 2024-09-04T17:41:04.354511Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 4 17:41:04.354554 waagent[2038]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 4 17:41:04.354554 waagent[2038]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Sep 4 17:41:04.354554 waagent[2038]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 4 17:41:04.354554 waagent[2038]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 4 17:41:04.354554 waagent[2038]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 4 17:41:04.354554 waagent[2038]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 4 17:41:04.364283 waagent[2038]: 2024-09-04T17:41:04.363989Z INFO ExtHandler ExtHandler Sep 4 17:41:04.364283 waagent[2038]: 2024-09-04T17:41:04.364104Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 1982372b-e242-4a71-ae54-8de01d04684c correlation 59bcd4aa-3ff3-429f-b0dd-1c8cb1c960fb created: 2024-09-04T17:39:53.642128Z] Sep 4 17:41:04.364745 waagent[2038]: 2024-09-04T17:41:04.364694Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 4 17:41:04.367313 waagent[2038]: 2024-09-04T17:41:04.367192Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Sep 4 17:41:04.398435 waagent[2038]: 2024-09-04T17:41:04.398303Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 96934D05-1AA1-4B39-B992-7C2001BFCC84;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Sep 4 17:41:04.443435 waagent[2038]: 2024-09-04T17:41:04.443145Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Sep 4 17:41:04.443435 waagent[2038]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:41:04.443435 waagent[2038]: pkts bytes target prot opt in out source destination Sep 4 17:41:04.443435 waagent[2038]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:41:04.443435 waagent[2038]: pkts bytes target prot opt in out source destination Sep 4 17:41:04.443435 waagent[2038]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:41:04.443435 waagent[2038]: pkts bytes target prot opt in out source destination Sep 4 17:41:04.443435 waagent[2038]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 4 17:41:04.443435 waagent[2038]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 4 17:41:04.443435 waagent[2038]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 4 17:41:04.447086 waagent[2038]: 2024-09-04T17:41:04.447021Z INFO MonitorHandler ExtHandler Network interfaces: Sep 4 17:41:04.447086 waagent[2038]: Executing ['ip', '-a', '-o', 'link']: Sep 4 17:41:04.447086 waagent[2038]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 4 17:41:04.447086 waagent[2038]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:d2:c5:02 brd ff:ff:ff:ff:ff:ff Sep 4 17:41:04.447086 waagent[2038]: 3: enP34365s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:d2:c5:02 brd ff:ff:ff:ff:ff:ff\ altname enP34365p0s2 Sep 4 17:41:04.447086 waagent[2038]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 4 17:41:04.447086 waagent[2038]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 4 17:41:04.447086 waagent[2038]: 2: eth0 inet 10.200.4.29/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 4 17:41:04.447086 waagent[2038]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 4 17:41:04.447086 waagent[2038]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 4 17:41:04.447086 waagent[2038]: 2: eth0 inet6 fe80::6245:bdff:fed2:c502/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 4 17:41:04.447086 waagent[2038]: 3: enP34365s1 inet6 fe80::6245:bdff:fed2:c502/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 4 17:41:04.464401 waagent[2038]: 2024-09-04T17:41:04.463872Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 4 17:41:04.464401 waagent[2038]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:41:04.464401 waagent[2038]: pkts bytes target prot opt in out source destination Sep 4 17:41:04.464401 waagent[2038]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:41:04.464401 waagent[2038]: pkts bytes target prot opt in out source destination Sep 4 17:41:04.464401 waagent[2038]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:41:04.464401 waagent[2038]: pkts bytes target prot opt in out source destination Sep 4 17:41:04.464401 waagent[2038]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 4 17:41:04.464401 waagent[2038]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 4 17:41:04.464401 waagent[2038]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 4 17:41:04.464401 waagent[2038]: 2024-09-04T17:41:04.464191Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 4 17:41:10.648816 systemd[1]: 
kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:41:10.654513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:41:10.760438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:41:10.763737 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:41:11.316623 kubelet[2282]: E0904 17:41:11.316561 2282 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:41:11.320695 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:41:11.321018 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:41:21.571501 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 17:41:21.578480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:41:21.602563 chronyd[1797]: Selected source PHC0 Sep 4 17:41:21.747428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:41:21.750917 (kubelet)[2302]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:41:22.194461 kubelet[2302]: E0904 17:41:22.194399 2302 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:41:22.197207 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:41:22.197563 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:41:31.116218 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:41:31.127539 systemd[1]: Started sshd@0-10.200.4.29:22-10.200.16.10:48344.service - OpenSSH per-connection server daemon (10.200.16.10:48344). Sep 4 17:41:31.783232 sshd[2312]: Accepted publickey for core from 10.200.16.10 port 48344 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:41:31.784985 sshd[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:41:31.789629 systemd-logind[1798]: New session 3 of user core. Sep 4 17:41:31.794545 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:41:32.292308 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 4 17:41:32.298472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:41:32.301591 systemd[1]: Started sshd@1-10.200.4.29:22-10.200.16.10:48346.service - OpenSSH per-connection server daemon (10.200.16.10:48346). Sep 4 17:41:32.406473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
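For reference, the OUTPUT-chain entries in the EnvHandler firewall listing further up correspond roughly to the following iptables invocations. They are shown only to make the listing concrete: the agent installs and manages these rules itself, and the listing does not say which table it queried.

    iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP

The net effect is that DNS and root-owned traffic may reach the WireServer at 168.63.129.16 while new connections from other users are dropped, matching the DROP-rule message logged earlier.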
Sep 4 17:41:32.415622 (kubelet)[2331]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:41:32.905530 sshd[2318]: Accepted publickey for core from 10.200.16.10 port 48346 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:41:32.907247 sshd[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:41:32.912926 systemd-logind[1798]: New session 4 of user core. Sep 4 17:41:32.921528 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:41:32.965195 kubelet[2331]: E0904 17:41:32.965147 2331 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:41:32.967762 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:41:32.968082 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:41:33.338875 sshd[2318]: pam_unix(sshd:session): session closed for user core Sep 4 17:41:33.342324 systemd[1]: sshd@1-10.200.4.29:22-10.200.16.10:48346.service: Deactivated successfully. Sep 4 17:41:33.347109 systemd-logind[1798]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:41:33.347855 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:41:33.349136 systemd-logind[1798]: Removed session 4. Sep 4 17:41:33.439839 systemd[1]: Started sshd@2-10.200.4.29:22-10.200.16.10:48362.service - OpenSSH per-connection server daemon (10.200.16.10:48362). Sep 4 17:41:34.021950 sshd[2346]: Accepted publickey for core from 10.200.16.10 port 48362 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:41:34.023724 sshd[2346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:41:34.028470 systemd-logind[1798]: New session 5 of user core. Sep 4 17:41:34.038561 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:41:34.437833 sshd[2346]: pam_unix(sshd:session): session closed for user core Sep 4 17:41:34.442494 systemd[1]: sshd@2-10.200.4.29:22-10.200.16.10:48362.service: Deactivated successfully. Sep 4 17:41:34.446715 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:41:34.447581 systemd-logind[1798]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:41:34.448524 systemd-logind[1798]: Removed session 5. Sep 4 17:41:34.545702 systemd[1]: Started sshd@3-10.200.4.29:22-10.200.16.10:48374.service - OpenSSH per-connection server daemon (10.200.16.10:48374). Sep 4 17:41:35.124692 sshd[2354]: Accepted publickey for core from 10.200.16.10 port 48374 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:41:35.126466 sshd[2354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:41:35.131118 systemd-logind[1798]: New session 6 of user core. Sep 4 17:41:35.137764 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:41:35.551880 sshd[2354]: pam_unix(sshd:session): session closed for user core Sep 4 17:41:35.556404 systemd[1]: sshd@3-10.200.4.29:22-10.200.16.10:48374.service: Deactivated successfully. Sep 4 17:41:35.560247 systemd-logind[1798]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:41:35.560920 systemd[1]: session-6.scope: Deactivated successfully. 
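The kubelet restarts above all fail for the same reason: /var/lib/kubelet/config.yaml does not exist yet. On nodes bootstrapped this way the file is normally written later by kubeadm during init or join, so the failures are expected at this stage. For orientation only, a minimal KubeletConfiguration of the kind that eventually lands there might look like this (values are illustrative, not taken from this node):

    # /var/lib/kubelet/config.yaml, illustrative sketch; normally generated by kubeadm
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      anonymous:
        enabled: false
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10    # hypothetical cluster DNS service address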
Sep 4 17:41:35.561931 systemd-logind[1798]: Removed session 6. Sep 4 17:41:35.660834 systemd[1]: Started sshd@4-10.200.4.29:22-10.200.16.10:48390.service - OpenSSH per-connection server daemon (10.200.16.10:48390). Sep 4 17:41:36.255902 sshd[2362]: Accepted publickey for core from 10.200.16.10 port 48390 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:41:36.257557 sshd[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:41:36.261989 systemd-logind[1798]: New session 7 of user core. Sep 4 17:41:36.269490 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:41:36.701871 sudo[2369]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:41:36.702334 sudo[2369]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:41:36.717703 sudo[2369]: pam_unix(sudo:session): session closed for user root Sep 4 17:41:36.819391 sshd[2362]: pam_unix(sshd:session): session closed for user core Sep 4 17:41:36.825075 systemd[1]: sshd@4-10.200.4.29:22-10.200.16.10:48390.service: Deactivated successfully. Sep 4 17:41:36.828912 systemd-logind[1798]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:41:36.829139 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:41:36.830440 systemd-logind[1798]: Removed session 7. Sep 4 17:41:36.922546 systemd[1]: Started sshd@5-10.200.4.29:22-10.200.16.10:48402.service - OpenSSH per-connection server daemon (10.200.16.10:48402). Sep 4 17:41:37.511879 sshd[2374]: Accepted publickey for core from 10.200.16.10 port 48402 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:41:37.513708 sshd[2374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:41:37.519221 systemd-logind[1798]: New session 8 of user core. Sep 4 17:41:37.525500 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:41:37.839885 sudo[2379]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:41:37.840239 sudo[2379]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:41:37.843586 sudo[2379]: pam_unix(sudo:session): session closed for user root Sep 4 17:41:37.848213 sudo[2378]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:41:37.848619 sudo[2378]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:41:37.866742 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:41:37.868183 auditctl[2382]: No rules Sep 4 17:41:37.868744 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:41:37.869055 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:41:37.873528 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:41:37.898398 augenrules[2401]: No rules Sep 4 17:41:37.899881 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:41:37.902888 sudo[2378]: pam_unix(sudo:session): session closed for user root Sep 4 17:41:38.005003 sshd[2374]: pam_unix(sshd:session): session closed for user core Sep 4 17:41:38.010159 systemd[1]: sshd@5-10.200.4.29:22-10.200.16.10:48402.service: Deactivated successfully. Sep 4 17:41:38.013083 systemd-logind[1798]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:41:38.013634 systemd[1]: session-8.scope: Deactivated successfully. 
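The sudo session above deletes the SELinux audit rule files and restarts audit-rules, after which both auditctl and augenrules report "No rules". The service restart boils down to roughly the following; the exact commands Flatcar's audit-rules.service runs are not shown in this log:

    augenrules --load    # merge /etc/audit/rules.d/*.rules and load the result
    auditctl -l          # list loaded rules; prints "No rules" here because the rule files were just removed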
Sep 4 17:41:38.014830 systemd-logind[1798]: Removed session 8. Sep 4 17:41:38.113187 systemd[1]: Started sshd@6-10.200.4.29:22-10.200.16.10:48418.service - OpenSSH per-connection server daemon (10.200.16.10:48418). Sep 4 17:41:38.695702 sshd[2410]: Accepted publickey for core from 10.200.16.10 port 48418 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:41:38.697476 sshd[2410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:41:38.702947 systemd-logind[1798]: New session 9 of user core. Sep 4 17:41:38.712538 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:41:38.850801 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Sep 4 17:41:39.021612 sudo[2414]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:41:39.021973 sudo[2414]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:41:39.452755 (dockerd)[2423]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:41:39.453135 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:41:41.114277 dockerd[2423]: time="2024-09-04T17:41:41.114206984Z" level=info msg="Starting up" Sep 4 17:41:41.611589 dockerd[2423]: time="2024-09-04T17:41:41.611547771Z" level=info msg="Loading containers: start." Sep 4 17:41:41.764290 kernel: Initializing XFRM netlink socket Sep 4 17:41:41.931582 systemd-networkd[1395]: docker0: Link UP Sep 4 17:41:41.953398 dockerd[2423]: time="2024-09-04T17:41:41.953361074Z" level=info msg="Loading containers: done." Sep 4 17:41:41.978215 dockerd[2423]: time="2024-09-04T17:41:41.978168663Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:41:41.978443 dockerd[2423]: time="2024-09-04T17:41:41.978309064Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 4 17:41:41.978443 dockerd[2423]: time="2024-09-04T17:41:41.978436465Z" level=info msg="Daemon has completed initialization" Sep 4 17:41:42.024076 dockerd[2423]: time="2024-09-04T17:41:42.024015612Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:41:42.024387 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:41:43.163523 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 4 17:41:43.170836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:41:43.642241 update_engine[1804]: I0904 17:41:43.642095 1804 update_attempter.cc:509] Updating boot flags... Sep 4 17:41:43.720276 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2573) Sep 4 17:41:43.833310 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2489) Sep 4 17:41:43.941424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
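dockerd finishes initialization above and reports "API listen on /run/docker.sock". A quick way to confirm the daemon is answering on that socket, assuming curl is present on the host (the log does not show this check being run):

    curl --unix-socket /run/docker.sock http://localhost/version
    docker version    # the same check through the CLI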
Sep 4 17:41:43.943450 (kubelet)[2634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:41:43.997783 kubelet[2634]: E0904 17:41:43.997722 2634 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:41:44.000916 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:41:44.001226 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:41:44.331815 containerd[1826]: time="2024-09-04T17:41:44.331693185Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\"" Sep 4 17:41:45.239838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1643549113.mount: Deactivated successfully. Sep 4 17:41:46.947641 containerd[1826]: time="2024-09-04T17:41:46.947587003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:46.950018 containerd[1826]: time="2024-09-04T17:41:46.949956820Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.13: active requests=0, bytes read=34530743" Sep 4 17:41:46.952627 containerd[1826]: time="2024-09-04T17:41:46.952564240Z" level=info msg="ImageCreate event name:\"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:46.957550 containerd[1826]: time="2024-09-04T17:41:46.957511576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:46.958967 containerd[1826]: time="2024-09-04T17:41:46.958549784Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.13\" with image id \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\", size \"34527535\" in 2.626814299s" Sep 4 17:41:46.958967 containerd[1826]: time="2024-09-04T17:41:46.958596584Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\" returns image reference \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\"" Sep 4 17:41:46.979573 containerd[1826]: time="2024-09-04T17:41:46.979545339Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\"" Sep 4 17:41:48.890268 containerd[1826]: time="2024-09-04T17:41:48.890208748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:48.892472 containerd[1826]: time="2024-09-04T17:41:48.892413364Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.13: active requests=0, bytes read=31849717" Sep 4 17:41:48.895633 containerd[1826]: time="2024-09-04T17:41:48.895569187Z" level=info msg="ImageCreate event name:\"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:48.900992 containerd[1826]: 
time="2024-09-04T17:41:48.900933327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:48.903196 containerd[1826]: time="2024-09-04T17:41:48.902149936Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.13\" with image id \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\", size \"33399655\" in 1.922420596s" Sep 4 17:41:48.903196 containerd[1826]: time="2024-09-04T17:41:48.902196836Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\" returns image reference \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\"" Sep 4 17:41:48.928747 containerd[1826]: time="2024-09-04T17:41:48.928718132Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\"" Sep 4 17:41:50.190798 containerd[1826]: time="2024-09-04T17:41:50.190746651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:50.193537 containerd[1826]: time="2024-09-04T17:41:50.193487671Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.13: active requests=0, bytes read=17097785" Sep 4 17:41:50.197048 containerd[1826]: time="2024-09-04T17:41:50.196996897Z" level=info msg="ImageCreate event name:\"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:50.206670 containerd[1826]: time="2024-09-04T17:41:50.206613468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:50.207761 containerd[1826]: time="2024-09-04T17:41:50.207596975Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.13\" with image id \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\", size \"18647741\" in 1.278832343s" Sep 4 17:41:50.207761 containerd[1826]: time="2024-09-04T17:41:50.207633676Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\" returns image reference \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\"" Sep 4 17:41:50.230454 containerd[1826]: time="2024-09-04T17:41:50.230413344Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\"" Sep 4 17:41:51.450434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2673017979.mount: Deactivated successfully. 
Sep 4 17:41:51.895178 containerd[1826]: time="2024-09-04T17:41:51.895042236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:51.897149 containerd[1826]: time="2024-09-04T17:41:51.897096151Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.13: active requests=0, bytes read=28303457" Sep 4 17:41:51.900669 containerd[1826]: time="2024-09-04T17:41:51.900621877Z" level=info msg="ImageCreate event name:\"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:51.904683 containerd[1826]: time="2024-09-04T17:41:51.904653107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:51.905718 containerd[1826]: time="2024-09-04T17:41:51.905224011Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.13\" with image id \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\", repo tag \"registry.k8s.io/kube-proxy:v1.28.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\", size \"28302468\" in 1.674759667s" Sep 4 17:41:51.905718 containerd[1826]: time="2024-09-04T17:41:51.905283811Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\" returns image reference \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\"" Sep 4 17:41:51.925401 containerd[1826]: time="2024-09-04T17:41:51.925364660Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:41:52.589810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2823122140.mount: Deactivated successfully. 
Sep 4 17:41:52.616662 containerd[1826]: time="2024-09-04T17:41:52.616619164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:52.619441 containerd[1826]: time="2024-09-04T17:41:52.619385784Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Sep 4 17:41:52.622727 containerd[1826]: time="2024-09-04T17:41:52.622679709Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:52.627867 containerd[1826]: time="2024-09-04T17:41:52.627820447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:52.629199 containerd[1826]: time="2024-09-04T17:41:52.629165157Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 703.762797ms" Sep 4 17:41:52.629301 containerd[1826]: time="2024-09-04T17:41:52.629198157Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Sep 4 17:41:52.651554 containerd[1826]: time="2024-09-04T17:41:52.651509922Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 17:41:53.379691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2032939050.mount: Deactivated successfully. Sep 4 17:41:54.163660 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 4 17:41:54.175499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:41:54.284434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:41:54.287473 (kubelet)[2790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:41:55.041359 kubelet[2790]: E0904 17:41:55.041232 2790 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:41:55.044038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:41:55.044378 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
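The pause:3.9 image pulled above is the sandbox ("infra") image that backs every pod. Which sandbox image containerd itself is configured to use can be checked with the following (illustrative):

    containerd config dump | grep sandbox_image

Note that later in this log the actual sandbox creation pulls pause:3.8, which suggests the daemon's configured sandbox image differs from the 3.9 image pre-pulled here.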
Sep 4 17:41:56.200699 containerd[1826]: time="2024-09-04T17:41:56.200640677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:56.203662 containerd[1826]: time="2024-09-04T17:41:56.203605588Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Sep 4 17:41:56.208776 containerd[1826]: time="2024-09-04T17:41:56.208737706Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:56.212858 containerd[1826]: time="2024-09-04T17:41:56.212803421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:56.215551 containerd[1826]: time="2024-09-04T17:41:56.214690827Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.563139005s" Sep 4 17:41:56.215551 containerd[1826]: time="2024-09-04T17:41:56.214732227Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Sep 4 17:41:56.239702 containerd[1826]: time="2024-09-04T17:41:56.239659116Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Sep 4 17:41:56.937498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1055137842.mount: Deactivated successfully. 
Sep 4 17:41:57.459958 containerd[1826]: time="2024-09-04T17:41:57.459900862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:57.461973 containerd[1826]: time="2024-09-04T17:41:57.461903570Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191757" Sep 4 17:41:57.466737 containerd[1826]: time="2024-09-04T17:41:57.466699087Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:57.471126 containerd[1826]: time="2024-09-04T17:41:57.471071902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:57.471946 containerd[1826]: time="2024-09-04T17:41:57.471802105Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.232091889s" Sep 4 17:41:57.471946 containerd[1826]: time="2024-09-04T17:41:57.471844205Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Sep 4 17:42:00.776791 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:42:00.783591 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:42:00.815311 systemd[1]: Reloading requested from client PID 2883 ('systemctl') (unit session-9.scope)... Sep 4 17:42:00.815334 systemd[1]: Reloading... Sep 4 17:42:00.937338 zram_generator::config[2921]: No configuration found. Sep 4 17:42:01.056460 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:42:01.130096 systemd[1]: Reloading finished in 314 ms. Sep 4 17:42:01.173776 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:42:01.173880 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:42:01.174252 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:42:01.179709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:42:01.475494 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:42:01.481556 (kubelet)[3002]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:42:02.098593 kubelet[3002]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:42:02.098593 kubelet[3002]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 4 17:42:02.098593 kubelet[3002]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:42:02.099093 kubelet[3002]: I0904 17:42:02.098649 3002 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:42:02.464996 kubelet[3002]: I0904 17:42:02.464961 3002 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:42:02.464996 kubelet[3002]: I0904 17:42:02.464992 3002 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:42:02.465276 kubelet[3002]: I0904 17:42:02.465238 3002 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:42:02.481958 kubelet[3002]: I0904 17:42:02.481552 3002 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:42:02.481958 kubelet[3002]: E0904 17:42:02.481773 3002 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.4.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.4.29:6443: connect: connection refused Sep 4 17:42:02.491595 kubelet[3002]: I0904 17:42:02.491573 3002 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 17:42:02.493295 kubelet[3002]: I0904 17:42:02.493265 3002 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:42:02.493477 kubelet[3002]: I0904 17:42:02.493454 3002 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:42:02.493628 kubelet[3002]: I0904 17:42:02.493481 3002 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:42:02.493628 kubelet[3002]: I0904 17:42:02.493497 3002 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 
17:42:02.494165 kubelet[3002]: I0904 17:42:02.494139 3002 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:42:02.495370 kubelet[3002]: I0904 17:42:02.495352 3002 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:42:02.495460 kubelet[3002]: I0904 17:42:02.495375 3002 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:42:02.495460 kubelet[3002]: I0904 17:42:02.495405 3002 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:42:02.495460 kubelet[3002]: I0904 17:42:02.495424 3002 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:42:02.499283 kubelet[3002]: W0904 17:42:02.498853 3002 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.4.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.29:6443: connect: connection refused Sep 4 17:42:02.499283 kubelet[3002]: E0904 17:42:02.498914 3002 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.4.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.29:6443: connect: connection refused Sep 4 17:42:02.500983 kubelet[3002]: W0904 17:42:02.500935 3002 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.4.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054.1.0-a-6fd622a1a5&limit=500&resourceVersion=0": dial tcp 10.200.4.29:6443: connect: connection refused Sep 4 17:42:02.501104 kubelet[3002]: E0904 17:42:02.501092 3002 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.4.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054.1.0-a-6fd622a1a5&limit=500&resourceVersion=0": dial tcp 10.200.4.29:6443: connect: connection refused Sep 4 17:42:02.501274 kubelet[3002]: I0904 17:42:02.501242 3002 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:42:02.503208 kubelet[3002]: W0904 17:42:02.503192 3002 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
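The deprecation notes above for --container-runtime-endpoint and --volume-plugin-dir point at moving those settings into the kubelet config file; --pod-infra-container-image, by contrast, is being removed outright because the sandbox image is now obtained from the CRI, as the message itself says. The config-file equivalents would look roughly like this; the endpoint value is an assumption, and the plugin directory is the one named in the Flexvolume message above:

    # additions to the KubeletConfiguration in /var/lib/kubelet/config.yaml (illustrative)
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/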
Sep 4 17:42:02.505004 kubelet[3002]: I0904 17:42:02.504987 3002 server.go:1232] "Started kubelet" Sep 4 17:42:02.511711 kubelet[3002]: E0904 17:42:02.510522 3002 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-4054.1.0-a-6fd622a1a5.17f21b63c1e3c0fb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-4054.1.0-a-6fd622a1a5", UID:"ci-4054.1.0-a-6fd622a1a5", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-4054.1.0-a-6fd622a1a5"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 42, 2, 504962299, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 42, 2, 504962299, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-4054.1.0-a-6fd622a1a5"}': 'Post "https://10.200.4.29:6443/api/v1/namespaces/default/events": dial tcp 10.200.4.29:6443: connect: connection refused'(may retry after sleeping) Sep 4 17:42:02.511711 kubelet[3002]: I0904 17:42:02.510767 3002 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:42:02.511711 kubelet[3002]: I0904 17:42:02.510887 3002 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:42:02.512583 kubelet[3002]: I0904 17:42:02.512566 3002 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:42:02.513954 kubelet[3002]: I0904 17:42:02.513935 3002 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:42:02.514313 kubelet[3002]: I0904 17:42:02.514287 3002 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:42:02.515007 kubelet[3002]: E0904 17:42:02.514993 3002 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:42:02.515109 kubelet[3002]: E0904 17:42:02.515099 3002 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:42:02.516986 kubelet[3002]: I0904 17:42:02.516963 3002 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:42:02.518471 kubelet[3002]: I0904 17:42:02.518452 3002 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:42:02.518551 kubelet[3002]: I0904 17:42:02.518523 3002 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:42:02.520765 kubelet[3002]: E0904 17:42:02.520749 3002 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054.1.0-a-6fd622a1a5?timeout=10s\": dial tcp 10.200.4.29:6443: connect: connection refused" interval="200ms" Sep 4 17:42:02.523499 kubelet[3002]: W0904 17:42:02.523456 3002 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.4.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.29:6443: connect: connection refused Sep 4 17:42:02.523580 kubelet[3002]: E0904 17:42:02.523510 3002 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.4.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.29:6443: connect: connection refused Sep 4 17:42:02.571801 kubelet[3002]: I0904 17:42:02.571779 3002 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:42:02.571801 kubelet[3002]: I0904 17:42:02.571801 3002 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:42:02.571950 kubelet[3002]: I0904 17:42:02.571819 3002 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:42:02.575344 kubelet[3002]: I0904 17:42:02.575324 3002 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:42:02.577056 kubelet[3002]: I0904 17:42:02.576770 3002 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:42:02.577056 kubelet[3002]: I0904 17:42:02.576791 3002 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:42:02.577056 kubelet[3002]: I0904 17:42:02.576816 3002 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:42:02.577056 kubelet[3002]: E0904 17:42:02.576878 3002 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:42:02.578658 kubelet[3002]: I0904 17:42:02.578636 3002 policy_none.go:49] "None policy: Start" Sep 4 17:42:02.580104 kubelet[3002]: W0904 17:42:02.579767 3002 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.4.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.29:6443: connect: connection refused Sep 4 17:42:02.580104 kubelet[3002]: E0904 17:42:02.579803 3002 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.4.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.29:6443: connect: connection refused Sep 4 17:42:02.580693 kubelet[3002]: I0904 17:42:02.580676 3002 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:42:02.580765 kubelet[3002]: I0904 17:42:02.580702 3002 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:42:02.588274 kubelet[3002]: I0904 17:42:02.587494 3002 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:42:02.588274 kubelet[3002]: I0904 17:42:02.587740 3002 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:42:02.590456 kubelet[3002]: E0904 17:42:02.590437 3002 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4054.1.0-a-6fd622a1a5\" not found" Sep 4 17:42:02.619470 kubelet[3002]: I0904 17:42:02.619435 3002 kubelet_node_status.go:70] "Attempting to register node" node="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:02.622081 kubelet[3002]: E0904 17:42:02.621959 3002 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.4.29:6443/api/v1/nodes\": dial tcp 10.200.4.29:6443: connect: connection refused" node="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:02.677763 kubelet[3002]: I0904 17:42:02.677734 3002 topology_manager.go:215] "Topology Admit Handler" podUID="9911448903c54a5f65b4064fadfcf8d0" podNamespace="kube-system" podName="kube-apiserver-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:02.679345 kubelet[3002]: I0904 17:42:02.679321 3002 topology_manager.go:215] "Topology Admit Handler" podUID="89daea9dd0997435d97217b5bd3252d9" podNamespace="kube-system" podName="kube-controller-manager-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:02.680961 kubelet[3002]: I0904 17:42:02.680740 3002 topology_manager.go:215] "Topology Admit Handler" podUID="9887a205b12905cc58bef9b7998900b3" podNamespace="kube-system" podName="kube-scheduler-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:02.719149 kubelet[3002]: I0904 17:42:02.718938 3002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9911448903c54a5f65b4064fadfcf8d0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4054.1.0-a-6fd622a1a5\" (UID: \"9911448903c54a5f65b4064fadfcf8d0\") " 
pod="kube-system/kube-apiserver-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:02.719149 kubelet[3002]: I0904 17:42:02.718987 3002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89daea9dd0997435d97217b5bd3252d9-kubeconfig\") pod \"kube-controller-manager-ci-4054.1.0-a-6fd622a1a5\" (UID: \"89daea9dd0997435d97217b5bd3252d9\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:02.719149 kubelet[3002]: I0904 17:42:02.719023 3002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9911448903c54a5f65b4064fadfcf8d0-ca-certs\") pod \"kube-apiserver-ci-4054.1.0-a-6fd622a1a5\" (UID: \"9911448903c54a5f65b4064fadfcf8d0\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:02.719149 kubelet[3002]: I0904 17:42:02.719059 3002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9911448903c54a5f65b4064fadfcf8d0-k8s-certs\") pod \"kube-apiserver-ci-4054.1.0-a-6fd622a1a5\" (UID: \"9911448903c54a5f65b4064fadfcf8d0\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:02.719149 kubelet[3002]: I0904 17:42:02.719093 3002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89daea9dd0997435d97217b5bd3252d9-ca-certs\") pod \"kube-controller-manager-ci-4054.1.0-a-6fd622a1a5\" (UID: \"89daea9dd0997435d97217b5bd3252d9\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:02.719477 kubelet[3002]: I0904 17:42:02.719127 3002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/89daea9dd0997435d97217b5bd3252d9-flexvolume-dir\") pod \"kube-controller-manager-ci-4054.1.0-a-6fd622a1a5\" (UID: \"89daea9dd0997435d97217b5bd3252d9\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:02.719477 kubelet[3002]: I0904 17:42:02.719162 3002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89daea9dd0997435d97217b5bd3252d9-k8s-certs\") pod \"kube-controller-manager-ci-4054.1.0-a-6fd622a1a5\" (UID: \"89daea9dd0997435d97217b5bd3252d9\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:02.719477 kubelet[3002]: I0904 17:42:02.719199 3002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89daea9dd0997435d97217b5bd3252d9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4054.1.0-a-6fd622a1a5\" (UID: \"89daea9dd0997435d97217b5bd3252d9\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:02.719477 kubelet[3002]: I0904 17:42:02.719233 3002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9887a205b12905cc58bef9b7998900b3-kubeconfig\") pod \"kube-scheduler-ci-4054.1.0-a-6fd622a1a5\" (UID: \"9887a205b12905cc58bef9b7998900b3\") " pod="kube-system/kube-scheduler-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:02.721178 kubelet[3002]: E0904 17:42:02.721133 3002 controller.go:146] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054.1.0-a-6fd622a1a5?timeout=10s\": dial tcp 10.200.4.29:6443: connect: connection refused" interval="400ms" Sep 4 17:42:02.824460 kubelet[3002]: I0904 17:42:02.824431 3002 kubelet_node_status.go:70] "Attempting to register node" node="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:02.824847 kubelet[3002]: E0904 17:42:02.824824 3002 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.4.29:6443/api/v1/nodes\": dial tcp 10.200.4.29:6443: connect: connection refused" node="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:02.986614 containerd[1826]: time="2024-09-04T17:42:02.986478315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4054.1.0-a-6fd622a1a5,Uid:9911448903c54a5f65b4064fadfcf8d0,Namespace:kube-system,Attempt:0,}" Sep 4 17:42:02.988056 containerd[1826]: time="2024-09-04T17:42:02.987991526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4054.1.0-a-6fd622a1a5,Uid:89daea9dd0997435d97217b5bd3252d9,Namespace:kube-system,Attempt:0,}" Sep 4 17:42:02.990668 containerd[1826]: time="2024-09-04T17:42:02.990634045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4054.1.0-a-6fd622a1a5,Uid:9887a205b12905cc58bef9b7998900b3,Namespace:kube-system,Attempt:0,}" Sep 4 17:42:03.122089 kubelet[3002]: E0904 17:42:03.122045 3002 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054.1.0-a-6fd622a1a5?timeout=10s\": dial tcp 10.200.4.29:6443: connect: connection refused" interval="800ms" Sep 4 17:42:03.227289 kubelet[3002]: I0904 17:42:03.227227 3002 kubelet_node_status.go:70] "Attempting to register node" node="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:03.227618 kubelet[3002]: E0904 17:42:03.227595 3002 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.4.29:6443/api/v1/nodes\": dial tcp 10.200.4.29:6443: connect: connection refused" node="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:03.386719 kubelet[3002]: W0904 17:42:03.386606 3002 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.4.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.29:6443: connect: connection refused Sep 4 17:42:03.386719 kubelet[3002]: E0904 17:42:03.386667 3002 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.4.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.29:6443: connect: connection refused Sep 4 17:42:03.451638 kubelet[3002]: W0904 17:42:03.451577 3002 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.4.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054.1.0-a-6fd622a1a5&limit=500&resourceVersion=0": dial tcp 10.200.4.29:6443: connect: connection refused Sep 4 17:42:03.451781 kubelet[3002]: E0904 17:42:03.451650 3002 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.4.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054.1.0-a-6fd622a1a5&limit=500&resourceVersion=0": dial tcp 10.200.4.29:6443: connect: 
connection refused Sep 4 17:42:03.636796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3837658395.mount: Deactivated successfully. Sep 4 17:42:03.664869 containerd[1826]: time="2024-09-04T17:42:03.664822828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:42:03.667178 containerd[1826]: time="2024-09-04T17:42:03.667136544Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Sep 4 17:42:03.669958 containerd[1826]: time="2024-09-04T17:42:03.669924964Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:42:03.672047 containerd[1826]: time="2024-09-04T17:42:03.672017179Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:42:03.674841 containerd[1826]: time="2024-09-04T17:42:03.674791399Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:42:03.677229 containerd[1826]: time="2024-09-04T17:42:03.677192416Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:42:03.679995 containerd[1826]: time="2024-09-04T17:42:03.679728834Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:42:03.686517 containerd[1826]: time="2024-09-04T17:42:03.686480281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:42:03.687547 containerd[1826]: time="2024-09-04T17:42:03.687511989Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 699.439862ms" Sep 4 17:42:03.688883 containerd[1826]: time="2024-09-04T17:42:03.688654997Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 702.080181ms" Sep 4 17:42:03.689869 containerd[1826]: time="2024-09-04T17:42:03.689838405Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 699.13406ms" Sep 4 17:42:03.755292 kubelet[3002]: W0904 17:42:03.752703 3002 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get 
"https://10.200.4.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.29:6443: connect: connection refused Sep 4 17:42:03.755292 kubelet[3002]: E0904 17:42:03.752782 3002 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.4.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.29:6443: connect: connection refused Sep 4 17:42:03.922733 kubelet[3002]: E0904 17:42:03.922701 3002 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054.1.0-a-6fd622a1a5?timeout=10s\": dial tcp 10.200.4.29:6443: connect: connection refused" interval="1.6s" Sep 4 17:42:04.030519 kubelet[3002]: I0904 17:42:04.030492 3002 kubelet_node_status.go:70] "Attempting to register node" node="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:04.030833 kubelet[3002]: E0904 17:42:04.030813 3002 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.4.29:6443/api/v1/nodes\": dial tcp 10.200.4.29:6443: connect: connection refused" node="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:04.049220 kubelet[3002]: W0904 17:42:04.049166 3002 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.4.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.29:6443: connect: connection refused Sep 4 17:42:04.049220 kubelet[3002]: E0904 17:42:04.049221 3002 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.4.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.29:6443: connect: connection refused Sep 4 17:42:04.336151 containerd[1826]: time="2024-09-04T17:42:04.335760188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:42:04.336151 containerd[1826]: time="2024-09-04T17:42:04.335845088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:42:04.336151 containerd[1826]: time="2024-09-04T17:42:04.335871688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:04.336151 containerd[1826]: time="2024-09-04T17:42:04.335983089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:04.341638 containerd[1826]: time="2024-09-04T17:42:04.341090026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:42:04.341638 containerd[1826]: time="2024-09-04T17:42:04.341379528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:42:04.341638 containerd[1826]: time="2024-09-04T17:42:04.341447428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:04.342606 containerd[1826]: time="2024-09-04T17:42:04.342389635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:04.344716 containerd[1826]: time="2024-09-04T17:42:04.344550250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:42:04.347484 containerd[1826]: time="2024-09-04T17:42:04.347120568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:42:04.347484 containerd[1826]: time="2024-09-04T17:42:04.347159269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:04.350313 containerd[1826]: time="2024-09-04T17:42:04.349861188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:04.444010 containerd[1826]: time="2024-09-04T17:42:04.443863855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4054.1.0-a-6fd622a1a5,Uid:9887a205b12905cc58bef9b7998900b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a41174f50aa37abd795efeb2a6d7dba9e00c8abeb1176c263fce04553896e68\"" Sep 4 17:42:04.457369 containerd[1826]: time="2024-09-04T17:42:04.455564238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4054.1.0-a-6fd622a1a5,Uid:89daea9dd0997435d97217b5bd3252d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a32f54e9cdd47e71d2efd718dee51fc024aae1bb269c7cc9fdb2453108eeb4b7\"" Sep 4 17:42:04.458603 containerd[1826]: time="2024-09-04T17:42:04.458573259Z" level=info msg="CreateContainer within sandbox \"0a41174f50aa37abd795efeb2a6d7dba9e00c8abeb1176c263fce04553896e68\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:42:04.463071 containerd[1826]: time="2024-09-04T17:42:04.463036891Z" level=info msg="CreateContainer within sandbox \"a32f54e9cdd47e71d2efd718dee51fc024aae1bb269c7cc9fdb2453108eeb4b7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:42:04.465509 containerd[1826]: time="2024-09-04T17:42:04.465476008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4054.1.0-a-6fd622a1a5,Uid:9911448903c54a5f65b4064fadfcf8d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c639b989e7b88e380fdec1394a59225d659498f6d81ac6ef8b9f8e380ccca07\"" Sep 4 17:42:04.469580 containerd[1826]: time="2024-09-04T17:42:04.469551437Z" level=info msg="CreateContainer within sandbox \"2c639b989e7b88e380fdec1394a59225d659498f6d81ac6ef8b9f8e380ccca07\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:42:04.496680 containerd[1826]: time="2024-09-04T17:42:04.496634729Z" level=info msg="CreateContainer within sandbox \"0a41174f50aa37abd795efeb2a6d7dba9e00c8abeb1176c263fce04553896e68\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d400138837a3e09d5d99b4237d96aafcc5f397ab59a0cd1ad2439ae3ab03dcf6\"" Sep 4 17:42:04.497568 containerd[1826]: time="2024-09-04T17:42:04.497534835Z" level=info msg="StartContainer for \"d400138837a3e09d5d99b4237d96aafcc5f397ab59a0cd1ad2439ae3ab03dcf6\"" Sep 4 17:42:04.529575 containerd[1826]: time="2024-09-04T17:42:04.529338761Z" level=info msg="CreateContainer within sandbox \"a32f54e9cdd47e71d2efd718dee51fc024aae1bb269c7cc9fdb2453108eeb4b7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"0b8c5e8467b02bfb86bfbef6d22970647b5fbe24708bbae482ec9479694c8e71\"" Sep 4 17:42:04.530365 containerd[1826]: time="2024-09-04T17:42:04.530231667Z" level=info msg="StartContainer for \"0b8c5e8467b02bfb86bfbef6d22970647b5fbe24708bbae482ec9479694c8e71\"" Sep 4 17:42:04.530703 containerd[1826]: time="2024-09-04T17:42:04.530644070Z" level=info msg="CreateContainer within sandbox \"2c639b989e7b88e380fdec1394a59225d659498f6d81ac6ef8b9f8e380ccca07\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a35630432bb56f8f93c79f2e29ba4f1a18a79e59f244f47df71f14b177990a81\"" Sep 4 17:42:04.532423 containerd[1826]: time="2024-09-04T17:42:04.532398283Z" level=info msg="StartContainer for \"a35630432bb56f8f93c79f2e29ba4f1a18a79e59f244f47df71f14b177990a81\"" Sep 4 17:42:04.552448 kubelet[3002]: E0904 17:42:04.552403 3002 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.4.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.4.29:6443: connect: connection refused Sep 4 17:42:04.632235 containerd[1826]: time="2024-09-04T17:42:04.629782074Z" level=info msg="StartContainer for \"d400138837a3e09d5d99b4237d96aafcc5f397ab59a0cd1ad2439ae3ab03dcf6\" returns successfully" Sep 4 17:42:04.692057 containerd[1826]: time="2024-09-04T17:42:04.690713006Z" level=info msg="StartContainer for \"0b8c5e8467b02bfb86bfbef6d22970647b5fbe24708bbae482ec9479694c8e71\" returns successfully" Sep 4 17:42:04.692057 containerd[1826]: time="2024-09-04T17:42:04.690792006Z" level=info msg="StartContainer for \"a35630432bb56f8f93c79f2e29ba4f1a18a79e59f244f47df71f14b177990a81\" returns successfully" Sep 4 17:42:05.645430 kubelet[3002]: I0904 17:42:05.645327 3002 kubelet_node_status.go:70] "Attempting to register node" node="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:07.044196 kubelet[3002]: E0904 17:42:07.044157 3002 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4054.1.0-a-6fd622a1a5\" not found" node="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:07.126457 kubelet[3002]: I0904 17:42:07.125461 3002 kubelet_node_status.go:73] "Successfully registered node" node="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:07.497857 kubelet[3002]: I0904 17:42:07.497810 3002 apiserver.go:52] "Watching apiserver" Sep 4 17:42:07.519527 kubelet[3002]: I0904 17:42:07.519447 3002 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:42:07.639595 kubelet[3002]: E0904 17:42:07.639556 3002 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4054.1.0-a-6fd622a1a5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:08.347236 kubelet[3002]: W0904 17:42:08.347166 3002 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:42:09.164421 kubelet[3002]: W0904 17:42:09.164099 3002 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:42:09.669603 systemd[1]: Reloading requested from client PID 3271 ('systemctl') (unit session-9.scope)... Sep 4 17:42:09.669618 systemd[1]: Reloading... 
Sep 4 17:42:09.758325 zram_generator::config[3308]: No configuration found. Sep 4 17:42:09.885591 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:42:09.968023 systemd[1]: Reloading finished in 297 ms. Sep 4 17:42:10.002331 kubelet[3002]: I0904 17:42:10.002247 3002 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:42:10.002585 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:42:10.011664 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:42:10.011997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:42:10.019736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:42:10.118779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:42:10.130680 (kubelet)[3385]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:42:10.175946 kubelet[3385]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:42:10.175946 kubelet[3385]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:42:10.175946 kubelet[3385]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:42:10.176446 kubelet[3385]: I0904 17:42:10.176001 3385 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:42:10.180578 kubelet[3385]: I0904 17:42:10.180550 3385 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:42:10.180578 kubelet[3385]: I0904 17:42:10.180573 3385 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:42:10.180878 kubelet[3385]: I0904 17:42:10.180857 3385 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:42:10.182439 kubelet[3385]: I0904 17:42:10.182416 3385 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:42:10.184754 kubelet[3385]: I0904 17:42:10.184735 3385 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:42:10.193700 kubelet[3385]: I0904 17:42:10.193627 3385 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:42:10.194162 kubelet[3385]: I0904 17:42:10.194141 3385 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:42:10.194376 kubelet[3385]: I0904 17:42:10.194354 3385 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:42:10.194520 kubelet[3385]: I0904 17:42:10.194383 3385 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:42:10.194520 kubelet[3385]: I0904 17:42:10.194398 3385 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:42:10.194520 kubelet[3385]: I0904 17:42:10.194453 3385 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:42:10.194657 kubelet[3385]: I0904 17:42:10.194561 3385 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:42:10.194657 kubelet[3385]: I0904 17:42:10.194578 3385 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:42:10.194657 kubelet[3385]: I0904 17:42:10.194608 3385 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:42:10.194657 kubelet[3385]: I0904 17:42:10.194632 3385 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:42:10.199164 kubelet[3385]: I0904 17:42:10.196216 3385 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:42:10.199164 kubelet[3385]: I0904 17:42:10.196824 3385 server.go:1232] "Started kubelet" Sep 4 17:42:10.199164 kubelet[3385]: I0904 17:42:10.198975 3385 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:42:10.211521 kubelet[3385]: E0904 17:42:10.209437 3385 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:42:10.211521 kubelet[3385]: E0904 17:42:10.209469 3385 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:42:10.213381 kubelet[3385]: I0904 17:42:10.213359 3385 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:42:10.215943 kubelet[3385]: I0904 17:42:10.215117 3385 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:42:10.217641 kubelet[3385]: I0904 17:42:10.217624 3385 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:42:10.218935 kubelet[3385]: I0904 17:42:10.217924 3385 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:42:10.222033 kubelet[3385]: I0904 17:42:10.222016 3385 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:42:10.224401 kubelet[3385]: I0904 17:42:10.224385 3385 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:42:10.224737 kubelet[3385]: I0904 17:42:10.224714 3385 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:42:10.230566 kubelet[3385]: I0904 17:42:10.230533 3385 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:42:10.231903 kubelet[3385]: I0904 17:42:10.231886 3385 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:42:10.231991 kubelet[3385]: I0904 17:42:10.231982 3385 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:42:10.232060 kubelet[3385]: I0904 17:42:10.232052 3385 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:42:10.232207 kubelet[3385]: E0904 17:42:10.232196 3385 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:42:10.332882 kubelet[3385]: E0904 17:42:10.332845 3385 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:42:10.338076 kubelet[3385]: I0904 17:42:10.338049 3385 kubelet_node_status.go:70] "Attempting to register node" node="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:10.356923 kubelet[3385]: I0904 17:42:10.356116 3385 kubelet_node_status.go:108] "Node was previously registered" node="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:10.356923 kubelet[3385]: I0904 17:42:10.356195 3385 kubelet_node_status.go:73] "Successfully registered node" node="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:10.389523 kubelet[3385]: I0904 17:42:10.389488 3385 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:42:10.389523 kubelet[3385]: I0904 17:42:10.389517 3385 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:42:10.389732 kubelet[3385]: I0904 17:42:10.389537 3385 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:42:10.389732 kubelet[3385]: I0904 17:42:10.389705 3385 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:42:10.389732 kubelet[3385]: I0904 17:42:10.389730 3385 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:42:10.389881 kubelet[3385]: I0904 17:42:10.389739 3385 policy_none.go:49] "None policy: Start" Sep 4 17:42:10.390493 kubelet[3385]: I0904 17:42:10.390470 3385 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:42:10.390600 kubelet[3385]: I0904 17:42:10.390498 3385 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:42:10.390729 kubelet[3385]: I0904 17:42:10.390709 3385 state_mem.go:75] "Updated machine memory state" Sep 4 17:42:10.391796 
kubelet[3385]: I0904 17:42:10.391776 3385 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:42:10.392802 kubelet[3385]: I0904 17:42:10.392715 3385 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:42:10.534335 kubelet[3385]: I0904 17:42:10.533019 3385 topology_manager.go:215] "Topology Admit Handler" podUID="9911448903c54a5f65b4064fadfcf8d0" podNamespace="kube-system" podName="kube-apiserver-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:10.534335 kubelet[3385]: I0904 17:42:10.533155 3385 topology_manager.go:215] "Topology Admit Handler" podUID="89daea9dd0997435d97217b5bd3252d9" podNamespace="kube-system" podName="kube-controller-manager-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:10.534335 kubelet[3385]: I0904 17:42:10.533216 3385 topology_manager.go:215] "Topology Admit Handler" podUID="9887a205b12905cc58bef9b7998900b3" podNamespace="kube-system" podName="kube-scheduler-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:10.542904 kubelet[3385]: W0904 17:42:10.542653 3385 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:42:10.542904 kubelet[3385]: W0904 17:42:10.542893 3385 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:42:10.543041 kubelet[3385]: E0904 17:42:10.542957 3385 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4054.1.0-a-6fd622a1a5\" already exists" pod="kube-system/kube-controller-manager-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:10.544145 kubelet[3385]: W0904 17:42:10.544111 3385 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:42:10.544236 kubelet[3385]: E0904 17:42:10.544170 3385 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4054.1.0-a-6fd622a1a5\" already exists" pod="kube-system/kube-scheduler-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:10.626015 kubelet[3385]: I0904 17:42:10.625982 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9887a205b12905cc58bef9b7998900b3-kubeconfig\") pod \"kube-scheduler-ci-4054.1.0-a-6fd622a1a5\" (UID: \"9887a205b12905cc58bef9b7998900b3\") " pod="kube-system/kube-scheduler-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:10.626158 kubelet[3385]: I0904 17:42:10.626077 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9911448903c54a5f65b4064fadfcf8d0-k8s-certs\") pod \"kube-apiserver-ci-4054.1.0-a-6fd622a1a5\" (UID: \"9911448903c54a5f65b4064fadfcf8d0\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:10.626158 kubelet[3385]: I0904 17:42:10.626146 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89daea9dd0997435d97217b5bd3252d9-ca-certs\") pod \"kube-controller-manager-ci-4054.1.0-a-6fd622a1a5\" (UID: \"89daea9dd0997435d97217b5bd3252d9\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:10.626291 kubelet[3385]: I0904 17:42:10.626202 3385 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/89daea9dd0997435d97217b5bd3252d9-flexvolume-dir\") pod \"kube-controller-manager-ci-4054.1.0-a-6fd622a1a5\" (UID: \"89daea9dd0997435d97217b5bd3252d9\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:10.626291 kubelet[3385]: I0904 17:42:10.626247 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89daea9dd0997435d97217b5bd3252d9-k8s-certs\") pod \"kube-controller-manager-ci-4054.1.0-a-6fd622a1a5\" (UID: \"89daea9dd0997435d97217b5bd3252d9\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:10.626414 kubelet[3385]: I0904 17:42:10.626303 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89daea9dd0997435d97217b5bd3252d9-kubeconfig\") pod \"kube-controller-manager-ci-4054.1.0-a-6fd622a1a5\" (UID: \"89daea9dd0997435d97217b5bd3252d9\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:10.626414 kubelet[3385]: I0904 17:42:10.626366 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89daea9dd0997435d97217b5bd3252d9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4054.1.0-a-6fd622a1a5\" (UID: \"89daea9dd0997435d97217b5bd3252d9\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:10.626414 kubelet[3385]: I0904 17:42:10.626404 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9911448903c54a5f65b4064fadfcf8d0-ca-certs\") pod \"kube-apiserver-ci-4054.1.0-a-6fd622a1a5\" (UID: \"9911448903c54a5f65b4064fadfcf8d0\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:10.626577 kubelet[3385]: I0904 17:42:10.626481 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9911448903c54a5f65b4064fadfcf8d0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4054.1.0-a-6fd622a1a5\" (UID: \"9911448903c54a5f65b4064fadfcf8d0\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:11.196316 kubelet[3385]: I0904 17:42:11.195985 3385 apiserver.go:52] "Watching apiserver" Sep 4 17:42:11.224887 kubelet[3385]: I0904 17:42:11.224849 3385 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:42:11.304681 kubelet[3385]: I0904 17:42:11.304173 3385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4054.1.0-a-6fd622a1a5" podStartSLOduration=3.304085069 podCreationTimestamp="2024-09-04 17:42:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:42:11.294925793 +0000 UTC m=+1.160469318" watchObservedRunningTime="2024-09-04 17:42:11.304085069 +0000 UTC m=+1.169628694" Sep 4 17:42:11.312082 kubelet[3385]: I0904 17:42:11.311910 3385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4054.1.0-a-6fd622a1a5" podStartSLOduration=1.311869834 
podCreationTimestamp="2024-09-04 17:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:42:11.305443781 +0000 UTC m=+1.170987306" watchObservedRunningTime="2024-09-04 17:42:11.311869834 +0000 UTC m=+1.177413359" Sep 4 17:42:11.320806 kubelet[3385]: I0904 17:42:11.320590 3385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4054.1.0-a-6fd622a1a5" podStartSLOduration=2.320555506 podCreationTimestamp="2024-09-04 17:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:42:11.312789242 +0000 UTC m=+1.178332867" watchObservedRunningTime="2024-09-04 17:42:11.320555506 +0000 UTC m=+1.186099131" Sep 4 17:42:14.352377 sudo[2414]: pam_unix(sudo:session): session closed for user root Sep 4 17:42:14.450248 sshd[2410]: pam_unix(sshd:session): session closed for user core Sep 4 17:42:14.453808 systemd[1]: sshd@6-10.200.4.29:22-10.200.16.10:48418.service: Deactivated successfully. Sep 4 17:42:14.459648 systemd-logind[1798]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:42:14.460382 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:42:14.462522 systemd-logind[1798]: Removed session 9. Sep 4 17:42:23.801340 kubelet[3385]: I0904 17:42:23.800470 3385 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:42:23.803477 containerd[1826]: time="2024-09-04T17:42:23.802553697Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:42:23.804030 kubelet[3385]: I0904 17:42:23.803051 3385 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:42:24.466329 kubelet[3385]: I0904 17:42:24.466294 3385 topology_manager.go:215] "Topology Admit Handler" podUID="7dbe55f7-ee7b-4d8f-8c8b-18fc61a2e48d" podNamespace="kube-system" podName="kube-proxy-tngv9" Sep 4 17:42:24.574932 kubelet[3385]: I0904 17:42:24.574899 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7dbe55f7-ee7b-4d8f-8c8b-18fc61a2e48d-xtables-lock\") pod \"kube-proxy-tngv9\" (UID: \"7dbe55f7-ee7b-4d8f-8c8b-18fc61a2e48d\") " pod="kube-system/kube-proxy-tngv9" Sep 4 17:42:24.575096 kubelet[3385]: I0904 17:42:24.574945 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7dbe55f7-ee7b-4d8f-8c8b-18fc61a2e48d-kube-proxy\") pod \"kube-proxy-tngv9\" (UID: \"7dbe55f7-ee7b-4d8f-8c8b-18fc61a2e48d\") " pod="kube-system/kube-proxy-tngv9" Sep 4 17:42:24.575096 kubelet[3385]: I0904 17:42:24.574975 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7dbe55f7-ee7b-4d8f-8c8b-18fc61a2e48d-lib-modules\") pod \"kube-proxy-tngv9\" (UID: \"7dbe55f7-ee7b-4d8f-8c8b-18fc61a2e48d\") " pod="kube-system/kube-proxy-tngv9" Sep 4 17:42:24.575096 kubelet[3385]: I0904 17:42:24.575002 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvr48\" (UniqueName: \"kubernetes.io/projected/7dbe55f7-ee7b-4d8f-8c8b-18fc61a2e48d-kube-api-access-wvr48\") pod \"kube-proxy-tngv9\" (UID: 
\"7dbe55f7-ee7b-4d8f-8c8b-18fc61a2e48d\") " pod="kube-system/kube-proxy-tngv9" Sep 4 17:42:24.772401 containerd[1826]: time="2024-09-04T17:42:24.771501538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tngv9,Uid:7dbe55f7-ee7b-4d8f-8c8b-18fc61a2e48d,Namespace:kube-system,Attempt:0,}" Sep 4 17:42:24.785946 kubelet[3385]: I0904 17:42:24.785914 3385 topology_manager.go:215] "Topology Admit Handler" podUID="03e32053-37f5-4ec4-abc2-ffa1b3b3bc2a" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-cvc96" Sep 4 17:42:24.978309 kubelet[3385]: I0904 17:42:24.978223 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/03e32053-37f5-4ec4-abc2-ffa1b3b3bc2a-var-lib-calico\") pod \"tigera-operator-5d56685c77-cvc96\" (UID: \"03e32053-37f5-4ec4-abc2-ffa1b3b3bc2a\") " pod="tigera-operator/tigera-operator-5d56685c77-cvc96" Sep 4 17:42:24.978309 kubelet[3385]: I0904 17:42:24.978300 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89qxd\" (UniqueName: \"kubernetes.io/projected/03e32053-37f5-4ec4-abc2-ffa1b3b3bc2a-kube-api-access-89qxd\") pod \"tigera-operator-5d56685c77-cvc96\" (UID: \"03e32053-37f5-4ec4-abc2-ffa1b3b3bc2a\") " pod="tigera-operator/tigera-operator-5d56685c77-cvc96" Sep 4 17:42:25.093687 containerd[1826]: time="2024-09-04T17:42:25.093571678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-cvc96,Uid:03e32053-37f5-4ec4-abc2-ffa1b3b3bc2a,Namespace:tigera-operator,Attempt:0,}" Sep 4 17:42:25.243090 containerd[1826]: time="2024-09-04T17:42:25.242244551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:42:25.243289 containerd[1826]: time="2024-09-04T17:42:25.243117058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:42:25.243289 containerd[1826]: time="2024-09-04T17:42:25.243136758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:25.243472 containerd[1826]: time="2024-09-04T17:42:25.243299559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:25.285355 containerd[1826]: time="2024-09-04T17:42:25.284881987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:42:25.285355 containerd[1826]: time="2024-09-04T17:42:25.284990188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:42:25.286149 containerd[1826]: time="2024-09-04T17:42:25.285580592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:25.287088 containerd[1826]: time="2024-09-04T17:42:25.286192497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:25.295854 containerd[1826]: time="2024-09-04T17:42:25.295482171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tngv9,Uid:7dbe55f7-ee7b-4d8f-8c8b-18fc61a2e48d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b6b935e33b079399c05b4ff01a4224ea2d04d14d6d8d37aeff001c7b966af0f\"" Sep 4 17:42:25.299481 containerd[1826]: time="2024-09-04T17:42:25.299385801Z" level=info msg="CreateContainer within sandbox \"3b6b935e33b079399c05b4ff01a4224ea2d04d14d6d8d37aeff001c7b966af0f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:42:25.342467 containerd[1826]: time="2024-09-04T17:42:25.342415741Z" level=info msg="CreateContainer within sandbox \"3b6b935e33b079399c05b4ff01a4224ea2d04d14d6d8d37aeff001c7b966af0f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"22064429def168b9d375589c5d7fb7d2360657f6943d55a17f3abe2eff1310b3\"" Sep 4 17:42:25.346875 containerd[1826]: time="2024-09-04T17:42:25.346587874Z" level=info msg="StartContainer for \"22064429def168b9d375589c5d7fb7d2360657f6943d55a17f3abe2eff1310b3\"" Sep 4 17:42:25.351902 containerd[1826]: time="2024-09-04T17:42:25.351858315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-cvc96,Uid:03e32053-37f5-4ec4-abc2-ffa1b3b3bc2a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d5ca931568ff3f925363fbcc7bec533a3fb629dbfc46a852dc14afb5dabe37ab\"" Sep 4 17:42:25.354243 containerd[1826]: time="2024-09-04T17:42:25.354213534Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 17:42:25.404363 containerd[1826]: time="2024-09-04T17:42:25.404317729Z" level=info msg="StartContainer for \"22064429def168b9d375589c5d7fb7d2360657f6943d55a17f3abe2eff1310b3\" returns successfully" Sep 4 17:42:26.320820 kubelet[3385]: I0904 17:42:26.320751 3385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tngv9" podStartSLOduration=2.320641666 podCreationTimestamp="2024-09-04 17:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:42:26.320540766 +0000 UTC m=+16.186084291" watchObservedRunningTime="2024-09-04 17:42:26.320641666 +0000 UTC m=+16.186185191" Sep 4 17:42:26.679514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3927918322.mount: Deactivated successfully. 
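
The pod_startup_latency_tracker entries above report podStartSLOduration values that agree, to within about a millisecond of sampling jitter, with the gap between the podCreationTimestamp and observedRunningTime printed in the same entry; the "m=+…" suffix on those timestamps is Go's monotonic-clock offset since the kubelet process started. A quick, self-contained check using the kube-proxy entry's own timestamps (Python standard library; the parsing helper is ad hoc for this log format, not anything from Kubernetes):

```python
from datetime import datetime, timezone

def parse_log_time(stamp):
    """Parse '2024-09-04 17:42:26.320540766 +0000 UTC' into epoch seconds,
    keeping the nanosecond fraction that datetime cannot hold directly."""
    parts = stamp.split(" ")
    date_part, time_part = parts[0], parts[1]
    hms, _, frac = time_part.partition(".")
    base = datetime.strptime(f"{date_part} {hms}", "%Y-%m-%d %H:%M:%S")
    base = base.replace(tzinfo=timezone.utc)
    return base.timestamp() + float(f"0.{frac or '0'}")

created = parse_log_time("2024-09-04 17:42:24 +0000 UTC")
running = parse_log_time("2024-09-04 17:42:26.320540766 +0000 UTC")
print(f"kube-proxy start-up took about {running - created:.3f}s")
# ~2.321s, matching the reported podStartSLOduration=2.320641666
```
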
Sep 4 17:42:27.482223 containerd[1826]: time="2024-09-04T17:42:27.482170690Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:27.485090 containerd[1826]: time="2024-09-04T17:42:27.485027811Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136537" Sep 4 17:42:27.488805 containerd[1826]: time="2024-09-04T17:42:27.488750839Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:27.494795 containerd[1826]: time="2024-09-04T17:42:27.494731383Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:27.497299 containerd[1826]: time="2024-09-04T17:42:27.496775498Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.142514264s" Sep 4 17:42:27.497299 containerd[1826]: time="2024-09-04T17:42:27.496816498Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Sep 4 17:42:27.504032 containerd[1826]: time="2024-09-04T17:42:27.504006752Z" level=info msg="CreateContainer within sandbox \"d5ca931568ff3f925363fbcc7bec533a3fb629dbfc46a852dc14afb5dabe37ab\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 17:42:27.538370 containerd[1826]: time="2024-09-04T17:42:27.538337907Z" level=info msg="CreateContainer within sandbox \"d5ca931568ff3f925363fbcc7bec533a3fb629dbfc46a852dc14afb5dabe37ab\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f8943908bfba8a3e4b67e11edb5b7642e3bb8ec9dccbda052f8a47d04d61f414\"" Sep 4 17:42:27.538811 containerd[1826]: time="2024-09-04T17:42:27.538700209Z" level=info msg="StartContainer for \"f8943908bfba8a3e4b67e11edb5b7642e3bb8ec9dccbda052f8a47d04d61f414\"" Sep 4 17:42:27.593208 containerd[1826]: time="2024-09-04T17:42:27.593152714Z" level=info msg="StartContainer for \"f8943908bfba8a3e4b67e11edb5b7642e3bb8ec9dccbda052f8a47d04d61f414\" returns successfully" Sep 4 17:42:30.245553 kubelet[3385]: I0904 17:42:30.244548 3385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-cvc96" podStartSLOduration=4.099101912 podCreationTimestamp="2024-09-04 17:42:24 +0000 UTC" firstStartedPulling="2024-09-04 17:42:25.353066925 +0000 UTC m=+15.218610450" lastFinishedPulling="2024-09-04 17:42:27.498471511 +0000 UTC m=+17.364015036" observedRunningTime="2024-09-04 17:42:28.32808237 +0000 UTC m=+18.193625995" watchObservedRunningTime="2024-09-04 17:42:30.244506498 +0000 UTC m=+20.110050123" Sep 4 17:42:30.523339 kubelet[3385]: I0904 17:42:30.518648 3385 topology_manager.go:215] "Topology Admit Handler" podUID="203149ef-a75d-4d10-96c1-cef5d462f78b" podNamespace="calico-system" podName="calico-typha-7bdd88b995-89t88" Sep 4 17:42:30.584281 kubelet[3385]: I0904 17:42:30.583081 3385 topology_manager.go:215] "Topology Admit Handler" 
podUID="572b8985-122f-4dc9-810a-bc7567e7ec05" podNamespace="calico-system" podName="calico-node-b82zs" Sep 4 17:42:30.699662 kubelet[3385]: I0904 17:42:30.699619 3385 topology_manager.go:215] "Topology Admit Handler" podUID="491350e8-d092-444b-99b6-cdf34980f429" podNamespace="calico-system" podName="csi-node-driver-sv6vb" Sep 4 17:42:30.701331 kubelet[3385]: E0904 17:42:30.701297 3385 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sv6vb" podUID="491350e8-d092-444b-99b6-cdf34980f429" Sep 4 17:42:30.719278 kubelet[3385]: I0904 17:42:30.718390 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/572b8985-122f-4dc9-810a-bc7567e7ec05-cni-net-dir\") pod \"calico-node-b82zs\" (UID: \"572b8985-122f-4dc9-810a-bc7567e7ec05\") " pod="calico-system/calico-node-b82zs" Sep 4 17:42:30.719278 kubelet[3385]: I0904 17:42:30.718444 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/572b8985-122f-4dc9-810a-bc7567e7ec05-var-lib-calico\") pod \"calico-node-b82zs\" (UID: \"572b8985-122f-4dc9-810a-bc7567e7ec05\") " pod="calico-system/calico-node-b82zs" Sep 4 17:42:30.719278 kubelet[3385]: I0904 17:42:30.718841 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/572b8985-122f-4dc9-810a-bc7567e7ec05-xtables-lock\") pod \"calico-node-b82zs\" (UID: \"572b8985-122f-4dc9-810a-bc7567e7ec05\") " pod="calico-system/calico-node-b82zs" Sep 4 17:42:30.719278 kubelet[3385]: I0904 17:42:30.718894 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/572b8985-122f-4dc9-810a-bc7567e7ec05-policysync\") pod \"calico-node-b82zs\" (UID: \"572b8985-122f-4dc9-810a-bc7567e7ec05\") " pod="calico-system/calico-node-b82zs" Sep 4 17:42:30.719278 kubelet[3385]: I0904 17:42:30.718925 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/572b8985-122f-4dc9-810a-bc7567e7ec05-tigera-ca-bundle\") pod \"calico-node-b82zs\" (UID: \"572b8985-122f-4dc9-810a-bc7567e7ec05\") " pod="calico-system/calico-node-b82zs" Sep 4 17:42:30.719551 kubelet[3385]: I0904 17:42:30.718954 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/572b8985-122f-4dc9-810a-bc7567e7ec05-flexvol-driver-host\") pod \"calico-node-b82zs\" (UID: \"572b8985-122f-4dc9-810a-bc7567e7ec05\") " pod="calico-system/calico-node-b82zs" Sep 4 17:42:30.719551 kubelet[3385]: I0904 17:42:30.718984 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/572b8985-122f-4dc9-810a-bc7567e7ec05-cni-log-dir\") pod \"calico-node-b82zs\" (UID: \"572b8985-122f-4dc9-810a-bc7567e7ec05\") " pod="calico-system/calico-node-b82zs" Sep 4 17:42:30.719551 kubelet[3385]: I0904 17:42:30.719011 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-sfv2c\" (UniqueName: \"kubernetes.io/projected/572b8985-122f-4dc9-810a-bc7567e7ec05-kube-api-access-sfv2c\") pod \"calico-node-b82zs\" (UID: \"572b8985-122f-4dc9-810a-bc7567e7ec05\") " pod="calico-system/calico-node-b82zs" Sep 4 17:42:30.719551 kubelet[3385]: I0904 17:42:30.719037 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/572b8985-122f-4dc9-810a-bc7567e7ec05-node-certs\") pod \"calico-node-b82zs\" (UID: \"572b8985-122f-4dc9-810a-bc7567e7ec05\") " pod="calico-system/calico-node-b82zs" Sep 4 17:42:30.719551 kubelet[3385]: I0904 17:42:30.719073 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/203149ef-a75d-4d10-96c1-cef5d462f78b-tigera-ca-bundle\") pod \"calico-typha-7bdd88b995-89t88\" (UID: \"203149ef-a75d-4d10-96c1-cef5d462f78b\") " pod="calico-system/calico-typha-7bdd88b995-89t88" Sep 4 17:42:30.719741 kubelet[3385]: I0904 17:42:30.719100 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n5sg\" (UniqueName: \"kubernetes.io/projected/203149ef-a75d-4d10-96c1-cef5d462f78b-kube-api-access-6n5sg\") pod \"calico-typha-7bdd88b995-89t88\" (UID: \"203149ef-a75d-4d10-96c1-cef5d462f78b\") " pod="calico-system/calico-typha-7bdd88b995-89t88" Sep 4 17:42:30.719741 kubelet[3385]: I0904 17:42:30.719128 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/203149ef-a75d-4d10-96c1-cef5d462f78b-typha-certs\") pod \"calico-typha-7bdd88b995-89t88\" (UID: \"203149ef-a75d-4d10-96c1-cef5d462f78b\") " pod="calico-system/calico-typha-7bdd88b995-89t88" Sep 4 17:42:30.719741 kubelet[3385]: I0904 17:42:30.719158 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/572b8985-122f-4dc9-810a-bc7567e7ec05-lib-modules\") pod \"calico-node-b82zs\" (UID: \"572b8985-122f-4dc9-810a-bc7567e7ec05\") " pod="calico-system/calico-node-b82zs" Sep 4 17:42:30.719741 kubelet[3385]: I0904 17:42:30.719186 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/572b8985-122f-4dc9-810a-bc7567e7ec05-var-run-calico\") pod \"calico-node-b82zs\" (UID: \"572b8985-122f-4dc9-810a-bc7567e7ec05\") " pod="calico-system/calico-node-b82zs" Sep 4 17:42:30.719741 kubelet[3385]: I0904 17:42:30.719212 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/572b8985-122f-4dc9-810a-bc7567e7ec05-cni-bin-dir\") pod \"calico-node-b82zs\" (UID: \"572b8985-122f-4dc9-810a-bc7567e7ec05\") " pod="calico-system/calico-node-b82zs" Sep 4 17:42:30.822589 kubelet[3385]: I0904 17:42:30.819672 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/491350e8-d092-444b-99b6-cdf34980f429-socket-dir\") pod \"csi-node-driver-sv6vb\" (UID: \"491350e8-d092-444b-99b6-cdf34980f429\") " pod="calico-system/csi-node-driver-sv6vb" Sep 4 17:42:30.822589 kubelet[3385]: I0904 17:42:30.819744 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cdtzw\" (UniqueName: \"kubernetes.io/projected/491350e8-d092-444b-99b6-cdf34980f429-kube-api-access-cdtzw\") pod \"csi-node-driver-sv6vb\" (UID: \"491350e8-d092-444b-99b6-cdf34980f429\") " pod="calico-system/csi-node-driver-sv6vb" Sep 4 17:42:30.822589 kubelet[3385]: I0904 17:42:30.819804 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/491350e8-d092-444b-99b6-cdf34980f429-registration-dir\") pod \"csi-node-driver-sv6vb\" (UID: \"491350e8-d092-444b-99b6-cdf34980f429\") " pod="calico-system/csi-node-driver-sv6vb" Sep 4 17:42:30.822589 kubelet[3385]: I0904 17:42:30.819854 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/491350e8-d092-444b-99b6-cdf34980f429-varrun\") pod \"csi-node-driver-sv6vb\" (UID: \"491350e8-d092-444b-99b6-cdf34980f429\") " pod="calico-system/csi-node-driver-sv6vb" Sep 4 17:42:30.822589 kubelet[3385]: I0904 17:42:30.819879 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/491350e8-d092-444b-99b6-cdf34980f429-kubelet-dir\") pod \"csi-node-driver-sv6vb\" (UID: \"491350e8-d092-444b-99b6-cdf34980f429\") " pod="calico-system/csi-node-driver-sv6vb" Sep 4 17:42:31.525448 kubelet[3385]: E0904 17:42:31.525408 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.525448 kubelet[3385]: W0904 17:42:31.525430 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.525448 kubelet[3385]: E0904 17:42:31.525458 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.526587 kubelet[3385]: E0904 17:42:31.525750 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.526587 kubelet[3385]: W0904 17:42:31.525763 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.526587 kubelet[3385]: E0904 17:42:31.525786 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.526587 kubelet[3385]: E0904 17:42:31.526033 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.526587 kubelet[3385]: W0904 17:42:31.526046 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.526587 kubelet[3385]: E0904 17:42:31.526067 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.526587 kubelet[3385]: E0904 17:42:31.526386 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.526587 kubelet[3385]: W0904 17:42:31.526400 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.526587 kubelet[3385]: E0904 17:42:31.526422 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.527158 kubelet[3385]: E0904 17:42:31.526636 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.527158 kubelet[3385]: W0904 17:42:31.526646 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.527158 kubelet[3385]: E0904 17:42:31.526662 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.527158 kubelet[3385]: E0904 17:42:31.526845 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.527158 kubelet[3385]: W0904 17:42:31.526854 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.527158 kubelet[3385]: E0904 17:42:31.526869 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.527158 kubelet[3385]: E0904 17:42:31.527050 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.527158 kubelet[3385]: W0904 17:42:31.527059 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.527158 kubelet[3385]: E0904 17:42:31.527074 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.527494 kubelet[3385]: E0904 17:42:31.527293 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.527494 kubelet[3385]: W0904 17:42:31.527303 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.527494 kubelet[3385]: E0904 17:42:31.527318 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.541610 kubelet[3385]: E0904 17:42:31.541590 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.541742 kubelet[3385]: W0904 17:42:31.541729 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.541831 kubelet[3385]: E0904 17:42:31.541822 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.555288 kubelet[3385]: E0904 17:42:31.553568 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.561141 kubelet[3385]: W0904 17:42:31.559424 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.561141 kubelet[3385]: E0904 17:42:31.559453 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.562908 kubelet[3385]: E0904 17:42:31.562876 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.563063 kubelet[3385]: W0904 17:42:31.562992 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.563063 kubelet[3385]: E0904 17:42:31.563021 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.566288 kubelet[3385]: E0904 17:42:31.564206 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.566288 kubelet[3385]: W0904 17:42:31.564221 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.566288 kubelet[3385]: E0904 17:42:31.564345 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.566288 kubelet[3385]: E0904 17:42:31.565069 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.566288 kubelet[3385]: W0904 17:42:31.565080 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.566288 kubelet[3385]: E0904 17:42:31.565226 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.570230 kubelet[3385]: E0904 17:42:31.568321 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.570230 kubelet[3385]: W0904 17:42:31.568337 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.570230 kubelet[3385]: E0904 17:42:31.568486 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.570230 kubelet[3385]: E0904 17:42:31.569526 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.570230 kubelet[3385]: W0904 17:42:31.569537 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.570230 kubelet[3385]: E0904 17:42:31.569556 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.573451 kubelet[3385]: E0904 17:42:31.573434 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.573451 kubelet[3385]: W0904 17:42:31.573451 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.573582 kubelet[3385]: E0904 17:42:31.573469 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.624919 kubelet[3385]: E0904 17:42:31.624879 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.624919 kubelet[3385]: W0904 17:42:31.624909 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.626131 kubelet[3385]: E0904 17:42:31.624939 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.626131 kubelet[3385]: E0904 17:42:31.625161 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.626131 kubelet[3385]: W0904 17:42:31.625173 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.626131 kubelet[3385]: E0904 17:42:31.625191 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.626131 kubelet[3385]: E0904 17:42:31.625404 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.626131 kubelet[3385]: W0904 17:42:31.625416 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.626131 kubelet[3385]: E0904 17:42:31.625434 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.626131 kubelet[3385]: E0904 17:42:31.625631 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.626131 kubelet[3385]: W0904 17:42:31.625642 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.626131 kubelet[3385]: E0904 17:42:31.625658 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.628268 kubelet[3385]: E0904 17:42:31.625863 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.628268 kubelet[3385]: W0904 17:42:31.625873 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.628268 kubelet[3385]: E0904 17:42:31.625888 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.628268 kubelet[3385]: E0904 17:42:31.626069 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.628268 kubelet[3385]: W0904 17:42:31.626079 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.628268 kubelet[3385]: E0904 17:42:31.626094 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.628268 kubelet[3385]: E0904 17:42:31.626280 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.628268 kubelet[3385]: W0904 17:42:31.626290 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.628268 kubelet[3385]: E0904 17:42:31.626305 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.628268 kubelet[3385]: E0904 17:42:31.626485 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.628694 kubelet[3385]: W0904 17:42:31.626495 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.628694 kubelet[3385]: E0904 17:42:31.626510 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.628694 kubelet[3385]: E0904 17:42:31.626711 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.628694 kubelet[3385]: W0904 17:42:31.626721 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.628694 kubelet[3385]: E0904 17:42:31.626736 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.628694 kubelet[3385]: E0904 17:42:31.626910 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.628694 kubelet[3385]: W0904 17:42:31.626919 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.628694 kubelet[3385]: E0904 17:42:31.626936 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.628694 kubelet[3385]: E0904 17:42:31.627115 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.628694 kubelet[3385]: W0904 17:42:31.627125 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.629115 kubelet[3385]: E0904 17:42:31.627140 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.629115 kubelet[3385]: E0904 17:42:31.627351 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.629115 kubelet[3385]: W0904 17:42:31.627364 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.629115 kubelet[3385]: E0904 17:42:31.627380 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.629115 kubelet[3385]: E0904 17:42:31.627587 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.629115 kubelet[3385]: W0904 17:42:31.627598 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.629115 kubelet[3385]: E0904 17:42:31.627616 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.629115 kubelet[3385]: E0904 17:42:31.627804 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.629115 kubelet[3385]: W0904 17:42:31.627815 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.629115 kubelet[3385]: E0904 17:42:31.627831 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.630907 kubelet[3385]: E0904 17:42:31.628010 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.630907 kubelet[3385]: W0904 17:42:31.628020 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.630907 kubelet[3385]: E0904 17:42:31.628035 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.630907 kubelet[3385]: E0904 17:42:31.628350 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.630907 kubelet[3385]: W0904 17:42:31.628361 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.630907 kubelet[3385]: E0904 17:42:31.628378 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.630907 kubelet[3385]: E0904 17:42:31.628555 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.630907 kubelet[3385]: W0904 17:42:31.628564 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.630907 kubelet[3385]: E0904 17:42:31.628580 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.630907 kubelet[3385]: E0904 17:42:31.628743 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.631705 kubelet[3385]: W0904 17:42:31.628752 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.631705 kubelet[3385]: E0904 17:42:31.628766 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.631705 kubelet[3385]: E0904 17:42:31.628937 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.631705 kubelet[3385]: W0904 17:42:31.628945 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.631705 kubelet[3385]: E0904 17:42:31.628962 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.631705 kubelet[3385]: E0904 17:42:31.629168 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.631705 kubelet[3385]: W0904 17:42:31.629178 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.631705 kubelet[3385]: E0904 17:42:31.629194 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.631705 kubelet[3385]: E0904 17:42:31.629438 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.631705 kubelet[3385]: W0904 17:42:31.629448 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.633683 kubelet[3385]: E0904 17:42:31.629467 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.633683 kubelet[3385]: E0904 17:42:31.629647 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.633683 kubelet[3385]: W0904 17:42:31.629657 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.633683 kubelet[3385]: E0904 17:42:31.629673 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.633683 kubelet[3385]: E0904 17:42:31.629841 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.633683 kubelet[3385]: W0904 17:42:31.629849 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.633683 kubelet[3385]: E0904 17:42:31.629863 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.633683 kubelet[3385]: E0904 17:42:31.630066 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.633683 kubelet[3385]: W0904 17:42:31.630075 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.633683 kubelet[3385]: E0904 17:42:31.630091 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.634615 kubelet[3385]: E0904 17:42:31.630269 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.634615 kubelet[3385]: W0904 17:42:31.630292 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.634615 kubelet[3385]: E0904 17:42:31.630308 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.634615 kubelet[3385]: E0904 17:42:31.630502 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.634615 kubelet[3385]: W0904 17:42:31.630512 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.634615 kubelet[3385]: E0904 17:42:31.630527 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.634615 kubelet[3385]: E0904 17:42:31.630710 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.634615 kubelet[3385]: W0904 17:42:31.630761 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.634615 kubelet[3385]: E0904 17:42:31.630780 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.634615 kubelet[3385]: E0904 17:42:31.631086 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.635621 kubelet[3385]: W0904 17:42:31.631097 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.635621 kubelet[3385]: E0904 17:42:31.631129 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.635621 kubelet[3385]: E0904 17:42:31.631363 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.635621 kubelet[3385]: W0904 17:42:31.631374 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.635621 kubelet[3385]: E0904 17:42:31.631393 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.635621 kubelet[3385]: E0904 17:42:31.632062 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.635621 kubelet[3385]: W0904 17:42:31.632074 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.635621 kubelet[3385]: E0904 17:42:31.632091 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.635621 kubelet[3385]: E0904 17:42:31.632443 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.635621 kubelet[3385]: W0904 17:42:31.632455 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.636497 kubelet[3385]: E0904 17:42:31.632471 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.636497 kubelet[3385]: E0904 17:42:31.632700 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.636497 kubelet[3385]: W0904 17:42:31.632709 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.636497 kubelet[3385]: E0904 17:42:31.632736 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.636497 kubelet[3385]: E0904 17:42:31.632913 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.636497 kubelet[3385]: W0904 17:42:31.632921 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.636497 kubelet[3385]: E0904 17:42:31.632936 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.636497 kubelet[3385]: E0904 17:42:31.633457 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.636497 kubelet[3385]: W0904 17:42:31.633468 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.636497 kubelet[3385]: E0904 17:42:31.633484 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.636866 kubelet[3385]: E0904 17:42:31.634107 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.636866 kubelet[3385]: W0904 17:42:31.634118 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.636866 kubelet[3385]: E0904 17:42:31.634135 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.636866 kubelet[3385]: E0904 17:42:31.634912 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.636866 kubelet[3385]: W0904 17:42:31.634923 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.636866 kubelet[3385]: E0904 17:42:31.635139 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.636866 kubelet[3385]: E0904 17:42:31.635480 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.636866 kubelet[3385]: W0904 17:42:31.635491 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.636866 kubelet[3385]: E0904 17:42:31.635512 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.636866 kubelet[3385]: E0904 17:42:31.636216 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.637688 kubelet[3385]: W0904 17:42:31.636228 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.637688 kubelet[3385]: E0904 17:42:31.636249 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.638029 kubelet[3385]: E0904 17:42:31.638006 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.638184 kubelet[3385]: W0904 17:42:31.638067 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.638184 kubelet[3385]: E0904 17:42:31.638096 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.638442 kubelet[3385]: E0904 17:42:31.638404 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.638442 kubelet[3385]: W0904 17:42:31.638418 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.638442 kubelet[3385]: E0904 17:42:31.638439 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.638993 kubelet[3385]: E0904 17:42:31.638776 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.638993 kubelet[3385]: W0904 17:42:31.638790 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.638993 kubelet[3385]: E0904 17:42:31.638947 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.639490 kubelet[3385]: E0904 17:42:31.639359 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.639490 kubelet[3385]: W0904 17:42:31.639383 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.639490 kubelet[3385]: E0904 17:42:31.639468 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.639982 kubelet[3385]: E0904 17:42:31.639880 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.639982 kubelet[3385]: W0904 17:42:31.639894 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.640316 kubelet[3385]: E0904 17:42:31.640037 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.640600 kubelet[3385]: E0904 17:42:31.640558 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.640816 kubelet[3385]: W0904 17:42:31.640707 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.640816 kubelet[3385]: E0904 17:42:31.640789 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.641246 kubelet[3385]: E0904 17:42:31.641151 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.641246 kubelet[3385]: W0904 17:42:31.641164 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.641533 kubelet[3385]: E0904 17:42:31.641401 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.641715 kubelet[3385]: E0904 17:42:31.641630 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.641715 kubelet[3385]: W0904 17:42:31.641641 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.641992 kubelet[3385]: E0904 17:42:31.641900 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.641992 kubelet[3385]: E0904 17:42:31.641969 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.641992 kubelet[3385]: W0904 17:42:31.641977 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.642341 kubelet[3385]: E0904 17:42:31.642298 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.642488 kubelet[3385]: E0904 17:42:31.642460 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.642488 kubelet[3385]: W0904 17:42:31.642471 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.642726 kubelet[3385]: E0904 17:42:31.642663 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.643058 kubelet[3385]: E0904 17:42:31.642950 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.643058 kubelet[3385]: W0904 17:42:31.642963 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.643489 kubelet[3385]: E0904 17:42:31.643201 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.643735 kubelet[3385]: E0904 17:42:31.643714 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.643827 kubelet[3385]: W0904 17:42:31.643813 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.644197 kubelet[3385]: E0904 17:42:31.644090 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.644197 kubelet[3385]: W0904 17:42:31.644104 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.644197 kubelet[3385]: E0904 17:42:31.644148 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.644197 kubelet[3385]: E0904 17:42:31.644173 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.644501 kubelet[3385]: E0904 17:42:31.644333 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.644501 kubelet[3385]: W0904 17:42:31.644344 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.644501 kubelet[3385]: E0904 17:42:31.644431 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.645670 kubelet[3385]: E0904 17:42:31.644650 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.645670 kubelet[3385]: W0904 17:42:31.644665 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.645670 kubelet[3385]: E0904 17:42:31.644692 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.645670 kubelet[3385]: E0904 17:42:31.644922 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.645670 kubelet[3385]: W0904 17:42:31.644935 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.645670 kubelet[3385]: E0904 17:42:31.644963 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.645670 kubelet[3385]: E0904 17:42:31.645185 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.645670 kubelet[3385]: W0904 17:42:31.645196 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.645670 kubelet[3385]: E0904 17:42:31.645220 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.645670 kubelet[3385]: E0904 17:42:31.645467 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.646138 kubelet[3385]: W0904 17:42:31.645478 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.646138 kubelet[3385]: E0904 17:42:31.645573 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.646138 kubelet[3385]: E0904 17:42:31.645800 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.646138 kubelet[3385]: W0904 17:42:31.645811 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.646138 kubelet[3385]: E0904 17:42:31.645873 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.646138 kubelet[3385]: E0904 17:42:31.646059 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.646138 kubelet[3385]: W0904 17:42:31.646069 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.647878 kubelet[3385]: E0904 17:42:31.646237 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.647878 kubelet[3385]: E0904 17:42:31.646696 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.647878 kubelet[3385]: W0904 17:42:31.646708 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.647878 kubelet[3385]: E0904 17:42:31.646739 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.647878 kubelet[3385]: E0904 17:42:31.646946 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.647878 kubelet[3385]: W0904 17:42:31.646956 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.647878 kubelet[3385]: E0904 17:42:31.647050 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.647878 kubelet[3385]: E0904 17:42:31.647188 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.647878 kubelet[3385]: W0904 17:42:31.647198 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.647878 kubelet[3385]: E0904 17:42:31.647284 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.648705 kubelet[3385]: E0904 17:42:31.647616 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.648705 kubelet[3385]: W0904 17:42:31.647627 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.648705 kubelet[3385]: E0904 17:42:31.647656 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.648705 kubelet[3385]: E0904 17:42:31.647927 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.648705 kubelet[3385]: W0904 17:42:31.647939 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.648705 kubelet[3385]: E0904 17:42:31.647960 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.648995 kubelet[3385]: E0904 17:42:31.648983 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.649195 kubelet[3385]: W0904 17:42:31.649084 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.649195 kubelet[3385]: E0904 17:42:31.649110 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.649605 kubelet[3385]: E0904 17:42:31.649594 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.649705 kubelet[3385]: W0904 17:42:31.649648 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.650016 kubelet[3385]: E0904 17:42:31.649861 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.650166 kubelet[3385]: E0904 17:42:31.650149 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.650166 kubelet[3385]: W0904 17:42:31.650160 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.650460 kubelet[3385]: E0904 17:42:31.650184 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.650521 kubelet[3385]: E0904 17:42:31.650500 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.650521 kubelet[3385]: W0904 17:42:31.650511 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.650624 kubelet[3385]: E0904 17:42:31.650528 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.651597 kubelet[3385]: E0904 17:42:31.650702 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.651597 kubelet[3385]: W0904 17:42:31.650714 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.651597 kubelet[3385]: E0904 17:42:31.650728 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.651597 kubelet[3385]: E0904 17:42:31.650930 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.651597 kubelet[3385]: W0904 17:42:31.650940 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.651597 kubelet[3385]: E0904 17:42:31.650955 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.651597 kubelet[3385]: E0904 17:42:31.651239 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.651597 kubelet[3385]: W0904 17:42:31.651250 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.651597 kubelet[3385]: E0904 17:42:31.651287 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.652149 kubelet[3385]: E0904 17:42:31.652113 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.652299 kubelet[3385]: W0904 17:42:31.652132 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.652299 kubelet[3385]: E0904 17:42:31.652242 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.652751 kubelet[3385]: E0904 17:42:31.652647 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.652751 kubelet[3385]: W0904 17:42:31.652661 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.652751 kubelet[3385]: E0904 17:42:31.652680 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.659502 kubelet[3385]: E0904 17:42:31.658814 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.659502 kubelet[3385]: W0904 17:42:31.658822 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.659502 kubelet[3385]: E0904 17:42:31.658838 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.740949 kubelet[3385]: E0904 17:42:31.740709 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.741220 kubelet[3385]: W0904 17:42:31.740962 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.741220 kubelet[3385]: E0904 17:42:31.740995 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.741695 kubelet[3385]: E0904 17:42:31.741340 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.741695 kubelet[3385]: W0904 17:42:31.741354 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.741695 kubelet[3385]: E0904 17:42:31.741376 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.741811 kubelet[3385]: E0904 17:42:31.741801 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.741852 kubelet[3385]: W0904 17:42:31.741813 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.741852 kubelet[3385]: E0904 17:42:31.741843 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.744245 kubelet[3385]: E0904 17:42:31.742128 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.744245 kubelet[3385]: W0904 17:42:31.742141 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.744245 kubelet[3385]: E0904 17:42:31.742358 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.744245 kubelet[3385]: W0904 17:42:31.742367 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.744245 kubelet[3385]: E0904 17:42:31.742416 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.744245 kubelet[3385]: E0904 17:42:31.742168 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.744245 kubelet[3385]: E0904 17:42:31.742648 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.744245 kubelet[3385]: W0904 17:42:31.742656 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.744245 kubelet[3385]: E0904 17:42:31.742809 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.744245 kubelet[3385]: E0904 17:42:31.742869 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.744717 kubelet[3385]: W0904 17:42:31.742876 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.744717 kubelet[3385]: E0904 17:42:31.742962 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.744717 kubelet[3385]: E0904 17:42:31.743093 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.744717 kubelet[3385]: W0904 17:42:31.743101 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.744717 kubelet[3385]: E0904 17:42:31.743121 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.744717 kubelet[3385]: E0904 17:42:31.743316 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.744717 kubelet[3385]: W0904 17:42:31.743325 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.744717 kubelet[3385]: E0904 17:42:31.743367 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.744717 kubelet[3385]: E0904 17:42:31.743591 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.744717 kubelet[3385]: W0904 17:42:31.743600 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.745147 kubelet[3385]: E0904 17:42:31.743620 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.745147 kubelet[3385]: E0904 17:42:31.743995 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.745147 kubelet[3385]: W0904 17:42:31.744004 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.745147 kubelet[3385]: E0904 17:42:31.744021 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.745147 kubelet[3385]: E0904 17:42:31.744251 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.745147 kubelet[3385]: W0904 17:42:31.744295 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.745147 kubelet[3385]: E0904 17:42:31.744319 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.745147 kubelet[3385]: E0904 17:42:31.744646 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.745147 kubelet[3385]: W0904 17:42:31.744656 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.745147 kubelet[3385]: E0904 17:42:31.744681 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.746736 kubelet[3385]: E0904 17:42:31.745062 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.746736 kubelet[3385]: W0904 17:42:31.745072 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.746736 kubelet[3385]: E0904 17:42:31.745112 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.746736 kubelet[3385]: E0904 17:42:31.745448 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.746736 kubelet[3385]: W0904 17:42:31.745459 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.746736 kubelet[3385]: E0904 17:42:31.745479 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.746736 kubelet[3385]: E0904 17:42:31.746054 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.746736 kubelet[3385]: W0904 17:42:31.746064 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.746736 kubelet[3385]: E0904 17:42:31.746174 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.746736 kubelet[3385]: E0904 17:42:31.746447 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.747384 kubelet[3385]: W0904 17:42:31.746477 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.747384 kubelet[3385]: E0904 17:42:31.746501 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:42:31.747384 kubelet[3385]: E0904 17:42:31.746743 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:42:31.747384 kubelet[3385]: W0904 17:42:31.746753 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:42:31.747384 kubelet[3385]: E0904 17:42:31.746790 3385 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:42:31.843301 containerd[1826]: time="2024-09-04T17:42:31.843151367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b82zs,Uid:572b8985-122f-4dc9-810a-bc7567e7ec05,Namespace:calico-system,Attempt:0,}" Sep 4 17:42:31.849722 containerd[1826]: time="2024-09-04T17:42:31.849669815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bdd88b995-89t88,Uid:203149ef-a75d-4d10-96c1-cef5d462f78b,Namespace:calico-system,Attempt:0,}" Sep 4 17:42:31.934064 containerd[1826]: time="2024-09-04T17:42:31.932727332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:42:31.934064 containerd[1826]: time="2024-09-04T17:42:31.932790632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:42:31.934064 containerd[1826]: time="2024-09-04T17:42:31.932834732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:31.934064 containerd[1826]: time="2024-09-04T17:42:31.932972733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:31.947187 containerd[1826]: time="2024-09-04T17:42:31.946613535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:42:31.948596 containerd[1826]: time="2024-09-04T17:42:31.947205439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:42:31.948596 containerd[1826]: time="2024-09-04T17:42:31.947471641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:31.948596 containerd[1826]: time="2024-09-04T17:42:31.947991645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:32.035286 containerd[1826]: time="2024-09-04T17:42:32.035194992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b82zs,Uid:572b8985-122f-4dc9-810a-bc7567e7ec05,Namespace:calico-system,Attempt:0,} returns sandbox id \"173d4eecbb0f81668b7b4318d7d47acda8a8d39bb8981faa7dde0dd69a3e16f6\"" Sep 4 17:42:32.039312 containerd[1826]: time="2024-09-04T17:42:32.039282723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:42:32.072945 containerd[1826]: time="2024-09-04T17:42:32.072909872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bdd88b995-89t88,Uid:203149ef-a75d-4d10-96c1-cef5d462f78b,Namespace:calico-system,Attempt:0,} returns sandbox id \"7054e8fcc6dcd62b97348312c0a22b228d67b33f8f74bbe3c444f564a76a69bd\"" Sep 4 17:42:32.234246 kubelet[3385]: E0904 17:42:32.233180 3385 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sv6vb" podUID="491350e8-d092-444b-99b6-cdf34980f429" Sep 4 17:42:33.557744 containerd[1826]: time="2024-09-04T17:42:33.557699696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:33.560607 containerd[1826]: time="2024-09-04T17:42:33.560550917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Sep 4 17:42:33.564061 containerd[1826]: time="2024-09-04T17:42:33.564006243Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:33.569536 containerd[1826]: time="2024-09-04T17:42:33.569503783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:33.570284 containerd[1826]: time="2024-09-04T17:42:33.570183788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.530815065s" Sep 4 17:42:33.570284 containerd[1826]: time="2024-09-04T17:42:33.570229789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Sep 4 17:42:33.572275 containerd[1826]: time="2024-09-04T17:42:33.571735800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:42:33.573889 containerd[1826]: time="2024-09-04T17:42:33.573850316Z" level=info msg="CreateContainer within sandbox \"173d4eecbb0f81668b7b4318d7d47acda8a8d39bb8981faa7dde0dd69a3e16f6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:42:33.608937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2506084767.mount: Deactivated successfully. 
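The flood of "unexpected end of JSON input" messages above comes from the kubelet's FlexVolume probe: on each plugin rescan it executes every binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/ with the single argument init and expects a JSON status object on stdout. The nodeagent~uds/uds binary is expected to be dropped into place by the flexvol-driver container created just above, so at this point the executable does not exist, the call produces no output, and the empty string fails to unmarshal. The sketch below shows the kind of reply the probe wants; the struct and field names follow the public FlexVolume convention and are illustrative assumptions, not Calico's actual uds driver.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object the kubelet's FlexVolume probe expects
// on stdout (a status plus optional capabilities); the names follow the public
// FlexVolume convention and are assumptions, not Calico's actual uds driver.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// The kubelet runs "<driver> init" while probing the plugin directory;
	// an empty stdout is what produces "unexpected end of JSON input" above.
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any other verb is reported as unsupported in this sketch.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}

Once a real driver binary that answers init like this lands in the nodeagent~uds directory, the probe succeeds and the repeated unmarshal failures stop.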
Sep 4 17:42:33.622537 containerd[1826]: time="2024-09-04T17:42:33.622465277Z" level=info msg="CreateContainer within sandbox \"173d4eecbb0f81668b7b4318d7d47acda8a8d39bb8981faa7dde0dd69a3e16f6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d47657232a3c3990a8efba45a43516d289f0db22578905e077bf0ff76cbc4742\"" Sep 4 17:42:33.623784 containerd[1826]: time="2024-09-04T17:42:33.623200482Z" level=info msg="StartContainer for \"d47657232a3c3990a8efba45a43516d289f0db22578905e077bf0ff76cbc4742\"" Sep 4 17:42:33.694245 containerd[1826]: time="2024-09-04T17:42:33.694092108Z" level=info msg="StartContainer for \"d47657232a3c3990a8efba45a43516d289f0db22578905e077bf0ff76cbc4742\" returns successfully" Sep 4 17:42:33.734669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d47657232a3c3990a8efba45a43516d289f0db22578905e077bf0ff76cbc4742-rootfs.mount: Deactivated successfully. Sep 4 17:42:34.475609 kubelet[3385]: E0904 17:42:34.232652 3385 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sv6vb" podUID="491350e8-d092-444b-99b6-cdf34980f429" Sep 4 17:42:34.534183 containerd[1826]: time="2024-09-04T17:42:34.533114474Z" level=info msg="shim disconnected" id=d47657232a3c3990a8efba45a43516d289f0db22578905e077bf0ff76cbc4742 namespace=k8s.io Sep 4 17:42:34.534183 containerd[1826]: time="2024-09-04T17:42:34.533297875Z" level=warning msg="cleaning up after shim disconnected" id=d47657232a3c3990a8efba45a43516d289f0db22578905e077bf0ff76cbc4742 namespace=k8s.io Sep 4 17:42:34.534183 containerd[1826]: time="2024-09-04T17:42:34.533320075Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:42:36.233766 kubelet[3385]: E0904 17:42:36.233372 3385 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sv6vb" podUID="491350e8-d092-444b-99b6-cdf34980f429" Sep 4 17:42:36.741336 containerd[1826]: time="2024-09-04T17:42:36.741294065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:36.743968 containerd[1826]: time="2024-09-04T17:42:36.743839584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Sep 4 17:42:36.747799 containerd[1826]: time="2024-09-04T17:42:36.747696012Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:36.751910 containerd[1826]: time="2024-09-04T17:42:36.751862844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:36.752695 containerd[1826]: time="2024-09-04T17:42:36.752470948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 3.180697648s" Sep 4 17:42:36.752695 containerd[1826]: time="2024-09-04T17:42:36.752507448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Sep 4 17:42:36.754165 containerd[1826]: time="2024-09-04T17:42:36.753438655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:42:36.774432 containerd[1826]: time="2024-09-04T17:42:36.774108810Z" level=info msg="CreateContainer within sandbox \"7054e8fcc6dcd62b97348312c0a22b228d67b33f8f74bbe3c444f564a76a69bd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:42:36.809291 containerd[1826]: time="2024-09-04T17:42:36.809236472Z" level=info msg="CreateContainer within sandbox \"7054e8fcc6dcd62b97348312c0a22b228d67b33f8f74bbe3c444f564a76a69bd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"85fcdd8cfec7bb4adee0600fdb282f302f8710096c4212aaec5f28bd978d5fba\"" Sep 4 17:42:36.810831 containerd[1826]: time="2024-09-04T17:42:36.810804584Z" level=info msg="StartContainer for \"85fcdd8cfec7bb4adee0600fdb282f302f8710096c4212aaec5f28bd978d5fba\"" Sep 4 17:42:36.892668 containerd[1826]: time="2024-09-04T17:42:36.892623695Z" level=info msg="StartContainer for \"85fcdd8cfec7bb4adee0600fdb282f302f8710096c4212aaec5f28bd978d5fba\" returns successfully" Sep 4 17:42:37.367788 kubelet[3385]: I0904 17:42:37.367753 3385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7bdd88b995-89t88" podStartSLOduration=2.689031473 podCreationTimestamp="2024-09-04 17:42:30 +0000 UTC" firstStartedPulling="2024-09-04 17:42:32.074372383 +0000 UTC m=+21.939915908" lastFinishedPulling="2024-09-04 17:42:36.753007952 +0000 UTC m=+26.618551577" observedRunningTime="2024-09-04 17:42:37.366325032 +0000 UTC m=+27.231868657" watchObservedRunningTime="2024-09-04 17:42:37.367667142 +0000 UTC m=+27.233210767" Sep 4 17:42:38.235290 kubelet[3385]: E0904 17:42:38.234411 3385 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sv6vb" podUID="491350e8-d092-444b-99b6-cdf34980f429" Sep 4 17:42:40.234310 kubelet[3385]: E0904 17:42:40.233959 3385 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sv6vb" podUID="491350e8-d092-444b-99b6-cdf34980f429" Sep 4 17:42:42.011164 containerd[1826]: time="2024-09-04T17:42:42.011121202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:42.013886 containerd[1826]: time="2024-09-04T17:42:42.013727621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Sep 4 17:42:42.017402 containerd[1826]: time="2024-09-04T17:42:42.017354648Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:42.021807 
containerd[1826]: time="2024-09-04T17:42:42.021758881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:42.022585 containerd[1826]: time="2024-09-04T17:42:42.022478586Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 5.26900573s" Sep 4 17:42:42.022585 containerd[1826]: time="2024-09-04T17:42:42.022512086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Sep 4 17:42:42.024529 containerd[1826]: time="2024-09-04T17:42:42.024304499Z" level=info msg="CreateContainer within sandbox \"173d4eecbb0f81668b7b4318d7d47acda8a8d39bb8981faa7dde0dd69a3e16f6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:42:42.058757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount362002218.mount: Deactivated successfully. Sep 4 17:42:42.071212 containerd[1826]: time="2024-09-04T17:42:42.071178547Z" level=info msg="CreateContainer within sandbox \"173d4eecbb0f81668b7b4318d7d47acda8a8d39bb8981faa7dde0dd69a3e16f6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b7287700a9e92e291f811a7e7bb8352cb380c0ef6c40e40ea522759ac6c22ce0\"" Sep 4 17:42:42.071707 containerd[1826]: time="2024-09-04T17:42:42.071660150Z" level=info msg="StartContainer for \"b7287700a9e92e291f811a7e7bb8352cb380c0ef6c40e40ea522759ac6c22ce0\"" Sep 4 17:42:42.133552 containerd[1826]: time="2024-09-04T17:42:42.133446808Z" level=info msg="StartContainer for \"b7287700a9e92e291f811a7e7bb8352cb380c0ef6c40e40ea522759ac6c22ce0\" returns successfully" Sep 4 17:42:42.234559 kubelet[3385]: E0904 17:42:42.233359 3385 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sv6vb" podUID="491350e8-d092-444b-99b6-cdf34980f429" Sep 4 17:42:43.548903 kubelet[3385]: I0904 17:42:43.548877 3385 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Sep 4 17:42:43.581580 kubelet[3385]: I0904 17:42:43.575023 3385 topology_manager.go:215] "Topology Admit Handler" podUID="e7e42435-7b56-42d2-be6c-74a6ec2abbc7" podNamespace="kube-system" podName="coredns-5dd5756b68-lfbkl" Sep 4 17:42:43.583860 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7287700a9e92e291f811a7e7bb8352cb380c0ef6c40e40ea522759ac6c22ce0-rootfs.mount: Deactivated successfully. 
Sep 4 17:42:43.591648 kubelet[3385]: I0904 17:42:43.590186 3385 topology_manager.go:215] "Topology Admit Handler" podUID="c47e5638-d358-4a1e-af81-643524e43581" podNamespace="calico-system" podName="calico-kube-controllers-66fdcc69-fvs8v" Sep 4 17:42:43.593511 kubelet[3385]: I0904 17:42:43.592293 3385 topology_manager.go:215] "Topology Admit Handler" podUID="5d044e69-88ac-4382-a02d-ee5526a871c4" podNamespace="kube-system" podName="coredns-5dd5756b68-v25f8" Sep 4 17:42:43.728652 kubelet[3385]: I0904 17:42:43.728559 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c47e5638-d358-4a1e-af81-643524e43581-tigera-ca-bundle\") pod \"calico-kube-controllers-66fdcc69-fvs8v\" (UID: \"c47e5638-d358-4a1e-af81-643524e43581\") " pod="calico-system/calico-kube-controllers-66fdcc69-fvs8v" Sep 4 17:42:43.728979 kubelet[3385]: I0904 17:42:43.728745 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d044e69-88ac-4382-a02d-ee5526a871c4-config-volume\") pod \"coredns-5dd5756b68-v25f8\" (UID: \"5d044e69-88ac-4382-a02d-ee5526a871c4\") " pod="kube-system/coredns-5dd5756b68-v25f8" Sep 4 17:42:43.728979 kubelet[3385]: I0904 17:42:43.728881 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb2kp\" (UniqueName: \"kubernetes.io/projected/5d044e69-88ac-4382-a02d-ee5526a871c4-kube-api-access-vb2kp\") pod \"coredns-5dd5756b68-v25f8\" (UID: \"5d044e69-88ac-4382-a02d-ee5526a871c4\") " pod="kube-system/coredns-5dd5756b68-v25f8" Sep 4 17:42:43.729157 kubelet[3385]: I0904 17:42:43.729080 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7rd8\" (UniqueName: \"kubernetes.io/projected/c47e5638-d358-4a1e-af81-643524e43581-kube-api-access-k7rd8\") pod \"calico-kube-controllers-66fdcc69-fvs8v\" (UID: \"c47e5638-d358-4a1e-af81-643524e43581\") " pod="calico-system/calico-kube-controllers-66fdcc69-fvs8v" Sep 4 17:42:43.729157 kubelet[3385]: I0904 17:42:43.729126 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7e42435-7b56-42d2-be6c-74a6ec2abbc7-config-volume\") pod \"coredns-5dd5756b68-lfbkl\" (UID: \"e7e42435-7b56-42d2-be6c-74a6ec2abbc7\") " pod="kube-system/coredns-5dd5756b68-lfbkl" Sep 4 17:42:43.729302 kubelet[3385]: I0904 17:42:43.729165 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t7gc\" (UniqueName: \"kubernetes.io/projected/e7e42435-7b56-42d2-be6c-74a6ec2abbc7-kube-api-access-2t7gc\") pod \"coredns-5dd5756b68-lfbkl\" (UID: \"e7e42435-7b56-42d2-be6c-74a6ec2abbc7\") " pod="kube-system/coredns-5dd5756b68-lfbkl" Sep 4 17:42:43.890136 containerd[1826]: time="2024-09-04T17:42:43.890029217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lfbkl,Uid:e7e42435-7b56-42d2-be6c-74a6ec2abbc7,Namespace:kube-system,Attempt:0,}" Sep 4 17:42:43.896995 containerd[1826]: time="2024-09-04T17:42:43.896862068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-v25f8,Uid:5d044e69-88ac-4382-a02d-ee5526a871c4,Namespace:kube-system,Attempt:0,}" Sep 4 17:42:43.900536 containerd[1826]: time="2024-09-04T17:42:43.900323893Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-66fdcc69-fvs8v,Uid:c47e5638-d358-4a1e-af81-643524e43581,Namespace:calico-system,Attempt:0,}" Sep 4 17:42:44.238089 containerd[1826]: time="2024-09-04T17:42:44.237783193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sv6vb,Uid:491350e8-d092-444b-99b6-cdf34980f429,Namespace:calico-system,Attempt:0,}" Sep 4 17:42:45.754808 containerd[1826]: time="2024-09-04T17:42:45.754741727Z" level=info msg="shim disconnected" id=b7287700a9e92e291f811a7e7bb8352cb380c0ef6c40e40ea522759ac6c22ce0 namespace=k8s.io Sep 4 17:42:45.755483 containerd[1826]: time="2024-09-04T17:42:45.754814028Z" level=warning msg="cleaning up after shim disconnected" id=b7287700a9e92e291f811a7e7bb8352cb380c0ef6c40e40ea522759ac6c22ce0 namespace=k8s.io Sep 4 17:42:45.755483 containerd[1826]: time="2024-09-04T17:42:45.754826728Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:42:45.958482 containerd[1826]: time="2024-09-04T17:42:45.958016833Z" level=error msg="Failed to destroy network for sandbox \"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:42:45.958482 containerd[1826]: time="2024-09-04T17:42:45.958361935Z" level=error msg="encountered an error cleaning up failed sandbox \"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:42:45.958482 containerd[1826]: time="2024-09-04T17:42:45.958423636Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lfbkl,Uid:e7e42435-7b56-42d2-be6c-74a6ec2abbc7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:42:45.959095 kubelet[3385]: E0904 17:42:45.959068 3385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:42:45.960090 kubelet[3385]: E0904 17:42:45.959768 3385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-lfbkl" Sep 4 17:42:45.961401 kubelet[3385]: E0904 17:42:45.960182 3385 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-lfbkl" Sep 4 17:42:45.961401 kubelet[3385]: E0904 17:42:45.960372 3385 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-lfbkl_kube-system(e7e42435-7b56-42d2-be6c-74a6ec2abbc7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-lfbkl_kube-system(e7e42435-7b56-42d2-be6c-74a6ec2abbc7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-lfbkl" podUID="e7e42435-7b56-42d2-be6c-74a6ec2abbc7" Sep 4 17:42:45.965559 containerd[1826]: time="2024-09-04T17:42:45.965512188Z" level=error msg="Failed to destroy network for sandbox \"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:42:45.966524 containerd[1826]: time="2024-09-04T17:42:45.966252394Z" level=error msg="encountered an error cleaning up failed sandbox \"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:42:45.966650 containerd[1826]: time="2024-09-04T17:42:45.966430795Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-v25f8,Uid:5d044e69-88ac-4382-a02d-ee5526a871c4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:42:45.967272 kubelet[3385]: E0904 17:42:45.967012 3385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:42:45.967272 kubelet[3385]: E0904 17:42:45.967065 3385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-v25f8" Sep 4 17:42:45.967272 kubelet[3385]: E0904 17:42:45.967095 3385 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-v25f8" Sep 4 17:42:45.967481 kubelet[3385]: E0904 17:42:45.967176 3385 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-v25f8_kube-system(5d044e69-88ac-4382-a02d-ee5526a871c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-v25f8_kube-system(5d044e69-88ac-4382-a02d-ee5526a871c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-v25f8" podUID="5d044e69-88ac-4382-a02d-ee5526a871c4" Sep 4 17:42:45.990406 containerd[1826]: time="2024-09-04T17:42:45.990353472Z" level=error msg="Failed to destroy network for sandbox \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:42:45.990968 containerd[1826]: time="2024-09-04T17:42:45.990924776Z" level=error msg="encountered an error cleaning up failed sandbox \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:42:45.991142 containerd[1826]: time="2024-09-04T17:42:45.991114978Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66fdcc69-fvs8v,Uid:c47e5638-d358-4a1e-af81-643524e43581,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:42:45.991560 kubelet[3385]: E0904 17:42:45.991530 3385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:42:45.991656 kubelet[3385]: E0904 17:42:45.991592 3385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66fdcc69-fvs8v" Sep 4 17:42:45.991656 kubelet[3385]: E0904 17:42:45.991619 3385 kuberuntime_manager.go:1171] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66fdcc69-fvs8v" Sep 4 17:42:45.991743 kubelet[3385]: E0904 17:42:45.991690 3385 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66fdcc69-fvs8v_calico-system(c47e5638-d358-4a1e-af81-643524e43581)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66fdcc69-fvs8v_calico-system(c47e5638-d358-4a1e-af81-643524e43581)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66fdcc69-fvs8v" podUID="c47e5638-d358-4a1e-af81-643524e43581" Sep 4 17:42:45.993754 containerd[1826]: time="2024-09-04T17:42:45.993703797Z" level=error msg="Failed to destroy network for sandbox \"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:42:45.994277 containerd[1826]: time="2024-09-04T17:42:45.994082000Z" level=error msg="encountered an error cleaning up failed sandbox \"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:42:45.994277 containerd[1826]: time="2024-09-04T17:42:45.994138100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sv6vb,Uid:491350e8-d092-444b-99b6-cdf34980f429,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:42:45.994527 kubelet[3385]: E0904 17:42:45.994502 3385 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:42:45.994621 kubelet[3385]: E0904 17:42:45.994553 3385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sv6vb" Sep 4 
17:42:45.994621 kubelet[3385]: E0904 17:42:45.994582 3385 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sv6vb" Sep 4 17:42:45.994710 kubelet[3385]: E0904 17:42:45.994653 3385 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sv6vb_calico-system(491350e8-d092-444b-99b6-cdf34980f429)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sv6vb_calico-system(491350e8-d092-444b-99b6-cdf34980f429)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sv6vb" podUID="491350e8-d092-444b-99b6-cdf34980f429" Sep 4 17:42:46.387414 containerd[1826]: time="2024-09-04T17:42:46.386809808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 17:42:46.389209 kubelet[3385]: I0904 17:42:46.388353 3385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Sep 4 17:42:46.390375 containerd[1826]: time="2024-09-04T17:42:46.389958732Z" level=info msg="StopPodSandbox for \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\"" Sep 4 17:42:46.390375 containerd[1826]: time="2024-09-04T17:42:46.390141833Z" level=info msg="Ensure that sandbox 47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838 in task-service has been cleanup successfully" Sep 4 17:42:46.395281 kubelet[3385]: I0904 17:42:46.395196 3385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Sep 4 17:42:46.398447 containerd[1826]: time="2024-09-04T17:42:46.397793490Z" level=info msg="StopPodSandbox for \"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\"" Sep 4 17:42:46.398906 containerd[1826]: time="2024-09-04T17:42:46.398637696Z" level=info msg="Ensure that sandbox e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709 in task-service has been cleanup successfully" Sep 4 17:42:46.401646 kubelet[3385]: I0904 17:42:46.401625 3385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Sep 4 17:42:46.408433 containerd[1826]: time="2024-09-04T17:42:46.408243667Z" level=info msg="StopPodSandbox for \"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\"" Sep 4 17:42:46.408988 containerd[1826]: time="2024-09-04T17:42:46.408880972Z" level=info msg="Ensure that sandbox 0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765 in task-service has been cleanup successfully" Sep 4 17:42:46.412299 kubelet[3385]: I0904 17:42:46.412134 3385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Sep 4 17:42:46.413982 containerd[1826]: 
time="2024-09-04T17:42:46.413631307Z" level=info msg="StopPodSandbox for \"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\"" Sep 4 17:42:46.413982 containerd[1826]: time="2024-09-04T17:42:46.413810808Z" level=info msg="Ensure that sandbox 75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56 in task-service has been cleanup successfully" Sep 4 17:42:46.469899 containerd[1826]: time="2024-09-04T17:42:46.469842623Z" level=error msg="StopPodSandbox for \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\" failed" error="failed to destroy network for sandbox \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:42:46.470877 kubelet[3385]: E0904 17:42:46.470647 3385 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Sep 4 17:42:46.470877 kubelet[3385]: E0904 17:42:46.470737 3385 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838"} Sep 4 17:42:46.470877 kubelet[3385]: E0904 17:42:46.470789 3385 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c47e5638-d358-4a1e-af81-643524e43581\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:42:46.470877 kubelet[3385]: E0904 17:42:46.470830 3385 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c47e5638-d358-4a1e-af81-643524e43581\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66fdcc69-fvs8v" podUID="c47e5638-d358-4a1e-af81-643524e43581" Sep 4 17:42:46.475709 containerd[1826]: time="2024-09-04T17:42:46.474240456Z" level=error msg="StopPodSandbox for \"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\" failed" error="failed to destroy network for sandbox \"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:42:46.475825 kubelet[3385]: E0904 17:42:46.474449 3385 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Sep 4 17:42:46.475825 kubelet[3385]: E0904 17:42:46.474483 3385 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56"} Sep 4 17:42:46.475825 kubelet[3385]: E0904 17:42:46.474529 3385 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e7e42435-7b56-42d2-be6c-74a6ec2abbc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:42:46.475825 kubelet[3385]: E0904 17:42:46.474563 3385 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e7e42435-7b56-42d2-be6c-74a6ec2abbc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-lfbkl" podUID="e7e42435-7b56-42d2-be6c-74a6ec2abbc7" Sep 4 17:42:46.479138 containerd[1826]: time="2024-09-04T17:42:46.479094192Z" level=error msg="StopPodSandbox for \"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\" failed" error="failed to destroy network for sandbox \"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:42:46.479505 kubelet[3385]: E0904 17:42:46.479484 3385 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Sep 4 17:42:46.479602 kubelet[3385]: E0904 17:42:46.479516 3385 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765"} Sep 4 17:42:46.479602 kubelet[3385]: E0904 17:42:46.479580 3385 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d044e69-88ac-4382-a02d-ee5526a871c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Sep 4 17:42:46.479602 kubelet[3385]: E0904 17:42:46.479612 3385 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d044e69-88ac-4382-a02d-ee5526a871c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-v25f8" podUID="5d044e69-88ac-4382-a02d-ee5526a871c4" Sep 4 17:42:46.482389 containerd[1826]: time="2024-09-04T17:42:46.482358116Z" level=error msg="StopPodSandbox for \"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\" failed" error="failed to destroy network for sandbox \"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:42:46.482542 kubelet[3385]: E0904 17:42:46.482532 3385 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Sep 4 17:42:46.482621 kubelet[3385]: E0904 17:42:46.482563 3385 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709"} Sep 4 17:42:46.482621 kubelet[3385]: E0904 17:42:46.482605 3385 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"491350e8-d092-444b-99b6-cdf34980f429\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:42:46.482734 kubelet[3385]: E0904 17:42:46.482649 3385 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"491350e8-d092-444b-99b6-cdf34980f429\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sv6vb" podUID="491350e8-d092-444b-99b6-cdf34980f429" Sep 4 17:42:46.813174 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838-shm.mount: Deactivated successfully. Sep 4 17:42:46.813377 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56-shm.mount: Deactivated successfully. 
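Note: every failure above traces back to one readiness gate. The Calico CNI plugin stats /var/lib/calico/nodename, which the calico/node agent writes once it is running, and until that file exists every sandbox add or delete fails with the "no such file or directory" error seen here. The small Go program below is a minimal sketch of that gate; the file path and error wording are taken from the log, while the polling program around them is purely illustrative and not Calico's own code.

// nodename_check.go - illustrative sketch only, not Calico source.
package main

import (
	"fmt"
	"os"
	"time"
)

const nodenameFile = "/var/lib/calico/nodename"

// waitForNodename polls for the nodename file, mirroring the condition the
// CNI plugin reports as "stat /var/lib/calico/nodename: no such file or
// directory" until calico/node is running and has mounted /var/lib/calico/.
func waitForNodename(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		data, err := os.ReadFile(nodenameFile)
		if err == nil {
			return string(data), nil
		}
		if !os.IsNotExist(err) {
			return "", fmt.Errorf("reading %s: %w", nodenameFile, err)
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("%s still missing: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	name, err := waitForNodename(30 * time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico/node looks ready, nodename:", name)
}

In the log, the gate clears once the calico/node image pull finishes at 17:42:54 and the calico-node container starts, after which the stale sandboxes can finally be torn down and re-created.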
Sep 4 17:42:46.813512 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765-shm.mount: Deactivated successfully. Sep 4 17:42:54.240973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount922364790.mount: Deactivated successfully. Sep 4 17:42:54.284194 containerd[1826]: time="2024-09-04T17:42:54.284143890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:54.286026 containerd[1826]: time="2024-09-04T17:42:54.285980502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Sep 4 17:42:54.292764 containerd[1826]: time="2024-09-04T17:42:54.292712447Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:54.296852 containerd[1826]: time="2024-09-04T17:42:54.296790175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:42:54.297536 containerd[1826]: time="2024-09-04T17:42:54.297361679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 7.91048517s" Sep 4 17:42:54.297536 containerd[1826]: time="2024-09-04T17:42:54.297399679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Sep 4 17:42:54.316810 containerd[1826]: time="2024-09-04T17:42:54.316674609Z" level=info msg="CreateContainer within sandbox \"173d4eecbb0f81668b7b4318d7d47acda8a8d39bb8981faa7dde0dd69a3e16f6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:42:54.361892 containerd[1826]: time="2024-09-04T17:42:54.361854814Z" level=info msg="CreateContainer within sandbox \"173d4eecbb0f81668b7b4318d7d47acda8a8d39bb8981faa7dde0dd69a3e16f6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d1d2d0a1b34ad973193d34ed2deb86cf102dc2e6acb956fb87b16356592925ae\"" Sep 4 17:42:54.363700 containerd[1826]: time="2024-09-04T17:42:54.362405618Z" level=info msg="StartContainer for \"d1d2d0a1b34ad973193d34ed2deb86cf102dc2e6acb956fb87b16356592925ae\"" Sep 4 17:42:54.416287 containerd[1826]: time="2024-09-04T17:42:54.415638877Z" level=info msg="StartContainer for \"d1d2d0a1b34ad973193d34ed2deb86cf102dc2e6acb956fb87b16356592925ae\" returns successfully" Sep 4 17:42:54.466828 kubelet[3385]: I0904 17:42:54.466788 3385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-b82zs" podStartSLOduration=2.207293955 podCreationTimestamp="2024-09-04 17:42:30 +0000 UTC" firstStartedPulling="2024-09-04 17:42:32.038380216 +0000 UTC m=+21.903923741" lastFinishedPulling="2024-09-04 17:42:54.297812682 +0000 UTC m=+44.163356207" observedRunningTime="2024-09-04 17:42:54.461843489 +0000 UTC m=+44.327387014" watchObservedRunningTime="2024-09-04 17:42:54.466726421 +0000 UTC m=+44.332270046" Sep 4 17:42:54.825054 kernel: wireguard: WireGuard 1.0.0 
loaded. See www.wireguard.com for information. Sep 4 17:42:54.825187 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 4 17:42:55.463981 systemd[1]: run-containerd-runc-k8s.io-d1d2d0a1b34ad973193d34ed2deb86cf102dc2e6acb956fb87b16356592925ae-runc.x24LQp.mount: Deactivated successfully. Sep 4 17:42:56.490373 kernel: bpftool[4609]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:42:56.792521 systemd-networkd[1395]: vxlan.calico: Link UP Sep 4 17:42:56.792534 systemd-networkd[1395]: vxlan.calico: Gained carrier Sep 4 17:42:58.234356 containerd[1826]: time="2024-09-04T17:42:58.233947966Z" level=info msg="StopPodSandbox for \"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\"" Sep 4 17:42:58.301919 containerd[1826]: 2024-09-04 17:42:58.271 [INFO][4697] k8s.go 608: Cleaning up netns ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Sep 4 17:42:58.301919 containerd[1826]: 2024-09-04 17:42:58.272 [INFO][4697] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" iface="eth0" netns="/var/run/netns/cni-40e2d72c-c420-7d9d-51db-078234ae0453" Sep 4 17:42:58.301919 containerd[1826]: 2024-09-04 17:42:58.273 [INFO][4697] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" iface="eth0" netns="/var/run/netns/cni-40e2d72c-c420-7d9d-51db-078234ae0453" Sep 4 17:42:58.301919 containerd[1826]: 2024-09-04 17:42:58.273 [INFO][4697] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" iface="eth0" netns="/var/run/netns/cni-40e2d72c-c420-7d9d-51db-078234ae0453" Sep 4 17:42:58.301919 containerd[1826]: 2024-09-04 17:42:58.273 [INFO][4697] k8s.go 615: Releasing IP address(es) ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Sep 4 17:42:58.301919 containerd[1826]: 2024-09-04 17:42:58.273 [INFO][4697] utils.go 188: Calico CNI releasing IP address ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Sep 4 17:42:58.301919 containerd[1826]: 2024-09-04 17:42:58.291 [INFO][4703] ipam_plugin.go 417: Releasing address using handleID ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" HandleID="k8s-pod-network.0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0" Sep 4 17:42:58.301919 containerd[1826]: 2024-09-04 17:42:58.291 [INFO][4703] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:42:58.301919 containerd[1826]: 2024-09-04 17:42:58.292 [INFO][4703] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:42:58.301919 containerd[1826]: 2024-09-04 17:42:58.297 [WARNING][4703] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" HandleID="k8s-pod-network.0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0" Sep 4 17:42:58.301919 containerd[1826]: 2024-09-04 17:42:58.297 [INFO][4703] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" HandleID="k8s-pod-network.0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0" Sep 4 17:42:58.301919 containerd[1826]: 2024-09-04 17:42:58.298 [INFO][4703] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:42:58.301919 containerd[1826]: 2024-09-04 17:42:58.300 [INFO][4697] k8s.go 621: Teardown processing complete. ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Sep 4 17:42:58.304329 containerd[1826]: time="2024-09-04T17:42:58.302073657Z" level=info msg="TearDown network for sandbox \"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\" successfully" Sep 4 17:42:58.304329 containerd[1826]: time="2024-09-04T17:42:58.302113457Z" level=info msg="StopPodSandbox for \"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\" returns successfully" Sep 4 17:42:58.304329 containerd[1826]: time="2024-09-04T17:42:58.303135665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-v25f8,Uid:5d044e69-88ac-4382-a02d-ee5526a871c4,Namespace:kube-system,Attempt:1,}" Sep 4 17:42:58.306850 systemd[1]: run-netns-cni\x2d40e2d72c\x2dc420\x2d7d9d\x2d51db\x2d078234ae0453.mount: Deactivated successfully. Sep 4 17:42:58.354066 systemd-networkd[1395]: vxlan.calico: Gained IPv6LL Sep 4 17:42:58.430411 systemd-networkd[1395]: calic7da4d637fd: Link UP Sep 4 17:42:58.430649 systemd-networkd[1395]: calic7da4d637fd: Gained carrier Sep 4 17:42:58.449238 containerd[1826]: 2024-09-04 17:42:58.373 [INFO][4709] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0 coredns-5dd5756b68- kube-system 5d044e69-88ac-4382-a02d-ee5526a871c4 726 0 2024-09-04 17:42:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4054.1.0-a-6fd622a1a5 coredns-5dd5756b68-v25f8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic7da4d637fd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d" Namespace="kube-system" Pod="coredns-5dd5756b68-v25f8" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-" Sep 4 17:42:58.449238 containerd[1826]: 2024-09-04 17:42:58.373 [INFO][4709] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d" Namespace="kube-system" Pod="coredns-5dd5756b68-v25f8" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0" Sep 4 17:42:58.449238 containerd[1826]: 2024-09-04 17:42:58.397 [INFO][4721] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d" HandleID="k8s-pod-network.234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d" 
Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0" Sep 4 17:42:58.449238 containerd[1826]: 2024-09-04 17:42:58.404 [INFO][4721] ipam_plugin.go 270: Auto assigning IP ContainerID="234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d" HandleID="k8s-pod-network.234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a760), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4054.1.0-a-6fd622a1a5", "pod":"coredns-5dd5756b68-v25f8", "timestamp":"2024-09-04 17:42:58.397193842 +0000 UTC"}, Hostname:"ci-4054.1.0-a-6fd622a1a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:42:58.449238 containerd[1826]: 2024-09-04 17:42:58.404 [INFO][4721] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:42:58.449238 containerd[1826]: 2024-09-04 17:42:58.404 [INFO][4721] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:42:58.449238 containerd[1826]: 2024-09-04 17:42:58.404 [INFO][4721] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-6fd622a1a5' Sep 4 17:42:58.449238 containerd[1826]: 2024-09-04 17:42:58.406 [INFO][4721] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:58.449238 containerd[1826]: 2024-09-04 17:42:58.410 [INFO][4721] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:58.449238 containerd[1826]: 2024-09-04 17:42:58.413 [INFO][4721] ipam.go 489: Trying affinity for 192.168.88.0/26 host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:58.449238 containerd[1826]: 2024-09-04 17:42:58.415 [INFO][4721] ipam.go 155: Attempting to load block cidr=192.168.88.0/26 host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:58.449238 containerd[1826]: 2024-09-04 17:42:58.416 [INFO][4721] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.0/26 host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:58.449238 containerd[1826]: 2024-09-04 17:42:58.416 [INFO][4721] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.0/26 handle="k8s-pod-network.234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:58.449238 containerd[1826]: 2024-09-04 17:42:58.418 [INFO][4721] ipam.go 1685: Creating new handle: k8s-pod-network.234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d Sep 4 17:42:58.449238 containerd[1826]: 2024-09-04 17:42:58.420 [INFO][4721] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.0/26 handle="k8s-pod-network.234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:58.449238 containerd[1826]: 2024-09-04 17:42:58.424 [INFO][4721] ipam.go 1216: Successfully claimed IPs: [192.168.88.1/26] block=192.168.88.0/26 handle="k8s-pod-network.234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:58.449238 containerd[1826]: 2024-09-04 17:42:58.424 [INFO][4721] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.1/26] handle="k8s-pod-network.234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d" 
host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:58.449238 containerd[1826]: 2024-09-04 17:42:58.424 [INFO][4721] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:42:58.449238 containerd[1826]: 2024-09-04 17:42:58.424 [INFO][4721] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.1/26] IPv6=[] ContainerID="234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d" HandleID="k8s-pod-network.234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0" Sep 4 17:42:58.452025 containerd[1826]: 2024-09-04 17:42:58.426 [INFO][4709] k8s.go 386: Populated endpoint ContainerID="234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d" Namespace="kube-system" Pod="coredns-5dd5756b68-v25f8" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"5d044e69-88ac-4382-a02d-ee5526a871c4", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 42, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-6fd622a1a5", ContainerID:"", Pod:"coredns-5dd5756b68-v25f8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic7da4d637fd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:42:58.452025 containerd[1826]: 2024-09-04 17:42:58.426 [INFO][4709] k8s.go 387: Calico CNI using IPs: [192.168.88.1/32] ContainerID="234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d" Namespace="kube-system" Pod="coredns-5dd5756b68-v25f8" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0" Sep 4 17:42:58.452025 containerd[1826]: 2024-09-04 17:42:58.426 [INFO][4709] dataplane_linux.go 68: Setting the host side veth name to calic7da4d637fd ContainerID="234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d" Namespace="kube-system" Pod="coredns-5dd5756b68-v25f8" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0" Sep 4 17:42:58.452025 containerd[1826]: 2024-09-04 17:42:58.430 [INFO][4709] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d" Namespace="kube-system" Pod="coredns-5dd5756b68-v25f8" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0" Sep 4 17:42:58.452025 containerd[1826]: 2024-09-04 17:42:58.431 [INFO][4709] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d" Namespace="kube-system" Pod="coredns-5dd5756b68-v25f8" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"5d044e69-88ac-4382-a02d-ee5526a871c4", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 42, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-6fd622a1a5", ContainerID:"234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d", Pod:"coredns-5dd5756b68-v25f8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic7da4d637fd", MAC:"ee:bb:de:5b:dc:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:42:58.452025 containerd[1826]: 2024-09-04 17:42:58.446 [INFO][4709] k8s.go 500: Wrote updated endpoint to datastore ContainerID="234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d" Namespace="kube-system" Pod="coredns-5dd5756b68-v25f8" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0" Sep 4 17:42:58.475955 containerd[1826]: time="2024-09-04T17:42:58.475866508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:42:58.475955 containerd[1826]: time="2024-09-04T17:42:58.475912709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:42:58.475955 containerd[1826]: time="2024-09-04T17:42:58.475932409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:58.476251 containerd[1826]: time="2024-09-04T17:42:58.476029309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:58.534345 containerd[1826]: time="2024-09-04T17:42:58.534318429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-v25f8,Uid:5d044e69-88ac-4382-a02d-ee5526a871c4,Namespace:kube-system,Attempt:1,} returns sandbox id \"234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d\"" Sep 4 17:42:58.537487 containerd[1826]: time="2024-09-04T17:42:58.537349751Z" level=info msg="CreateContainer within sandbox \"234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:42:58.574887 containerd[1826]: time="2024-09-04T17:42:58.574846221Z" level=info msg="CreateContainer within sandbox \"234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dadde38df3c93d2dbecbaecf6b748a297539177aaf7b9a8ae37599158370fb69\"" Sep 4 17:42:58.576227 containerd[1826]: time="2024-09-04T17:42:58.575382925Z" level=info msg="StartContainer for \"dadde38df3c93d2dbecbaecf6b748a297539177aaf7b9a8ae37599158370fb69\"" Sep 4 17:42:58.623336 containerd[1826]: time="2024-09-04T17:42:58.623298170Z" level=info msg="StartContainer for \"dadde38df3c93d2dbecbaecf6b748a297539177aaf7b9a8ae37599158370fb69\" returns successfully" Sep 4 17:42:59.233471 containerd[1826]: time="2024-09-04T17:42:59.233433863Z" level=info msg="StopPodSandbox for \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\"" Sep 4 17:42:59.302769 containerd[1826]: 2024-09-04 17:42:59.274 [INFO][4829] k8s.go 608: Cleaning up netns ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Sep 4 17:42:59.302769 containerd[1826]: 2024-09-04 17:42:59.274 [INFO][4829] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" iface="eth0" netns="/var/run/netns/cni-36855529-cd37-96c6-92a4-85281fbfc6e0" Sep 4 17:42:59.302769 containerd[1826]: 2024-09-04 17:42:59.275 [INFO][4829] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" iface="eth0" netns="/var/run/netns/cni-36855529-cd37-96c6-92a4-85281fbfc6e0" Sep 4 17:42:59.302769 containerd[1826]: 2024-09-04 17:42:59.275 [INFO][4829] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" iface="eth0" netns="/var/run/netns/cni-36855529-cd37-96c6-92a4-85281fbfc6e0" Sep 4 17:42:59.302769 containerd[1826]: 2024-09-04 17:42:59.275 [INFO][4829] k8s.go 615: Releasing IP address(es) ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Sep 4 17:42:59.302769 containerd[1826]: 2024-09-04 17:42:59.275 [INFO][4829] utils.go 188: Calico CNI releasing IP address ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Sep 4 17:42:59.302769 containerd[1826]: 2024-09-04 17:42:59.294 [INFO][4835] ipam_plugin.go 417: Releasing address using handleID ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" HandleID="k8s-pod-network.47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0" Sep 4 17:42:59.302769 containerd[1826]: 2024-09-04 17:42:59.294 [INFO][4835] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 17:42:59.302769 containerd[1826]: 2024-09-04 17:42:59.294 [INFO][4835] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:42:59.302769 containerd[1826]: 2024-09-04 17:42:59.298 [WARNING][4835] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" HandleID="k8s-pod-network.47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0" Sep 4 17:42:59.302769 containerd[1826]: 2024-09-04 17:42:59.298 [INFO][4835] ipam_plugin.go 445: Releasing address using workloadID ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" HandleID="k8s-pod-network.47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0" Sep 4 17:42:59.302769 containerd[1826]: 2024-09-04 17:42:59.300 [INFO][4835] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:42:59.302769 containerd[1826]: 2024-09-04 17:42:59.301 [INFO][4829] k8s.go 621: Teardown processing complete. ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Sep 4 17:42:59.306172 containerd[1826]: time="2024-09-04T17:42:59.302854563Z" level=info msg="TearDown network for sandbox \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\" successfully" Sep 4 17:42:59.306172 containerd[1826]: time="2024-09-04T17:42:59.302933063Z" level=info msg="StopPodSandbox for \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\" returns successfully" Sep 4 17:42:59.306588 containerd[1826]: time="2024-09-04T17:42:59.306556689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66fdcc69-fvs8v,Uid:c47e5638-d358-4a1e-af81-643524e43581,Namespace:calico-system,Attempt:1,}" Sep 4 17:42:59.310093 systemd[1]: run-netns-cni\x2d36855529\x2dcd37\x2d96c6\x2d92a4\x2d85281fbfc6e0.mount: Deactivated successfully. 
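Note: with calico/node running, the kubelet's retried StopPodSandbox calls now succeed because the DEL path is idempotent: the workload's veth is already gone ("Nothing to do."), the IPAM handle is already absent ("Asked to release address but it doesn't exist. Ignoring"), both are treated as success, and the pod is re-created as Attempt:1. The sketch below only illustrates that idempotent-teardown shape with made-up in-memory state; none of these identifiers are Calico's real API, though the netns path and handle ID strings are copied from the log.

// idempotent_teardown.go - illustrative sketch only.
package main

import "fmt"

// In-memory stand-ins for the state a real CNI DEL would touch.
var (
	netnsVeths  = map[string]string{}   // netns path -> veth name
	ipamHandles = map[string][]string{} // IPAM handle ID -> allocated IPs
)

// teardown removes the workload's veth and releases its IPAM handle,
// treating "already gone" as success in both steps, like the log's
// "Nothing to do." and "Ignoring" messages.
func teardown(netns, handleID string) error {
	if _, ok := netnsVeths[netns]; ok {
		delete(netnsVeths, netns)
	} else {
		fmt.Println("workload's veth was already gone, nothing to do")
	}
	if _, ok := ipamHandles[handleID]; ok {
		delete(ipamHandles, handleID)
	} else {
		fmt.Println("asked to release address but it doesn't exist, ignoring")
	}
	return nil
}

func main() {
	// A retried StopPodSandbox reaches teardown with nothing left to clean
	// and still returns success, so the kubelet can re-create the sandbox.
	_ = teardown("/var/run/netns/cni-36855529-cd37-96c6-92a4-85281fbfc6e0",
		"k8s-pod-network.47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838")
}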
Sep 4 17:42:59.426586 systemd-networkd[1395]: cali349234acc35: Link UP Sep 4 17:42:59.427629 systemd-networkd[1395]: cali349234acc35: Gained carrier Sep 4 17:42:59.441322 containerd[1826]: 2024-09-04 17:42:59.372 [INFO][4842] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0 calico-kube-controllers-66fdcc69- calico-system c47e5638-d358-4a1e-af81-643524e43581 737 0 2024-09-04 17:42:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66fdcc69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4054.1.0-a-6fd622a1a5 calico-kube-controllers-66fdcc69-fvs8v eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali349234acc35 [] []}} ContainerID="c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c" Namespace="calico-system" Pod="calico-kube-controllers-66fdcc69-fvs8v" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-" Sep 4 17:42:59.441322 containerd[1826]: 2024-09-04 17:42:59.372 [INFO][4842] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c" Namespace="calico-system" Pod="calico-kube-controllers-66fdcc69-fvs8v" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0" Sep 4 17:42:59.441322 containerd[1826]: 2024-09-04 17:42:59.396 [INFO][4852] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c" HandleID="k8s-pod-network.c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0" Sep 4 17:42:59.441322 containerd[1826]: 2024-09-04 17:42:59.403 [INFO][4852] ipam_plugin.go 270: Auto assigning IP ContainerID="c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c" HandleID="k8s-pod-network.c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000267e40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4054.1.0-a-6fd622a1a5", "pod":"calico-kube-controllers-66fdcc69-fvs8v", "timestamp":"2024-09-04 17:42:59.396360436 +0000 UTC"}, Hostname:"ci-4054.1.0-a-6fd622a1a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:42:59.441322 containerd[1826]: 2024-09-04 17:42:59.403 [INFO][4852] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:42:59.441322 containerd[1826]: 2024-09-04 17:42:59.403 [INFO][4852] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:42:59.441322 containerd[1826]: 2024-09-04 17:42:59.403 [INFO][4852] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-6fd622a1a5' Sep 4 17:42:59.441322 containerd[1826]: 2024-09-04 17:42:59.404 [INFO][4852] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:59.441322 containerd[1826]: 2024-09-04 17:42:59.407 [INFO][4852] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:59.441322 containerd[1826]: 2024-09-04 17:42:59.411 [INFO][4852] ipam.go 489: Trying affinity for 192.168.88.0/26 host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:59.441322 containerd[1826]: 2024-09-04 17:42:59.412 [INFO][4852] ipam.go 155: Attempting to load block cidr=192.168.88.0/26 host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:59.441322 containerd[1826]: 2024-09-04 17:42:59.414 [INFO][4852] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.0/26 host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:59.441322 containerd[1826]: 2024-09-04 17:42:59.414 [INFO][4852] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.0/26 handle="k8s-pod-network.c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:59.441322 containerd[1826]: 2024-09-04 17:42:59.415 [INFO][4852] ipam.go 1685: Creating new handle: k8s-pod-network.c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c Sep 4 17:42:59.441322 containerd[1826]: 2024-09-04 17:42:59.418 [INFO][4852] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.0/26 handle="k8s-pod-network.c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:59.441322 containerd[1826]: 2024-09-04 17:42:59.421 [INFO][4852] ipam.go 1216: Successfully claimed IPs: [192.168.88.2/26] block=192.168.88.0/26 handle="k8s-pod-network.c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:59.441322 containerd[1826]: 2024-09-04 17:42:59.421 [INFO][4852] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.2/26] handle="k8s-pod-network.c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:42:59.441322 containerd[1826]: 2024-09-04 17:42:59.421 [INFO][4852] ipam_plugin.go 379: Released host-wide IPAM lock. 
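Note: the IPAM trace above shows the per-host allocation pattern. This node holds an affinity for the block 192.168.88.0/26, so each new pod receives the next free address from that block under the host-wide IPAM lock: .1 went to coredns-5dd5756b68-v25f8 earlier, .2 goes to calico-kube-controllers-66fdcc69-fvs8v here, and .3 goes to coredns-5dd5756b68-lfbkl further below. The Go sketch that follows mimics that sequential assignment from a /26; the types and handle strings are illustrative and are not Calico's ipam.go.

// block_alloc.go - illustrative sketch only.
package main

import (
	"fmt"
	"net/netip"
)

// block models a host-affine IPAM block such as 192.168.88.0/26.
type block struct {
	cidr      netip.Prefix
	allocated map[netip.Addr]string // address -> handle that claimed it
}

// assign hands out the lowest unallocated address in the block (skipping the
// network address) and records the claiming handle, the way the log shows
// 192.168.88.1, .2 and .3 being claimed in turn.
func (b *block) assign(handle string) (netip.Addr, error) {
	for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
		if _, used := b.allocated[a]; !used {
			b.allocated[a] = handle
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &block{
		cidr:      netip.MustParsePrefix("192.168.88.0/26"),
		allocated: map[netip.Addr]string{},
	}
	for _, pod := range []string{
		"coredns-5dd5756b68-v25f8",
		"calico-kube-controllers-66fdcc69-fvs8v",
		"coredns-5dd5756b68-lfbkl",
	} {
		ip, err := b.assign(pod)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s -> %s/32\n", pod, ip) // 192.168.88.1, .2, .3
	}
}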
Sep 4 17:42:59.441322 containerd[1826]: 2024-09-04 17:42:59.421 [INFO][4852] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.2/26] IPv6=[] ContainerID="c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c" HandleID="k8s-pod-network.c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0" Sep 4 17:42:59.444129 containerd[1826]: 2024-09-04 17:42:59.423 [INFO][4842] k8s.go 386: Populated endpoint ContainerID="c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c" Namespace="calico-system" Pod="calico-kube-controllers-66fdcc69-fvs8v" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0", GenerateName:"calico-kube-controllers-66fdcc69-", Namespace:"calico-system", SelfLink:"", UID:"c47e5638-d358-4a1e-af81-643524e43581", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 42, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66fdcc69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-6fd622a1a5", ContainerID:"", Pod:"calico-kube-controllers-66fdcc69-fvs8v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali349234acc35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:42:59.444129 containerd[1826]: 2024-09-04 17:42:59.423 [INFO][4842] k8s.go 387: Calico CNI using IPs: [192.168.88.2/32] ContainerID="c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c" Namespace="calico-system" Pod="calico-kube-controllers-66fdcc69-fvs8v" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0" Sep 4 17:42:59.444129 containerd[1826]: 2024-09-04 17:42:59.424 [INFO][4842] dataplane_linux.go 68: Setting the host side veth name to cali349234acc35 ContainerID="c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c" Namespace="calico-system" Pod="calico-kube-controllers-66fdcc69-fvs8v" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0" Sep 4 17:42:59.444129 containerd[1826]: 2024-09-04 17:42:59.427 [INFO][4842] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c" Namespace="calico-system" Pod="calico-kube-controllers-66fdcc69-fvs8v" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0" Sep 4 17:42:59.444129 containerd[1826]: 2024-09-04 17:42:59.428 [INFO][4842] k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c" Namespace="calico-system" Pod="calico-kube-controllers-66fdcc69-fvs8v" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0", GenerateName:"calico-kube-controllers-66fdcc69-", Namespace:"calico-system", SelfLink:"", UID:"c47e5638-d358-4a1e-af81-643524e43581", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 42, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66fdcc69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-6fd622a1a5", ContainerID:"c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c", Pod:"calico-kube-controllers-66fdcc69-fvs8v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali349234acc35", MAC:"da:97:0b:d7:a1:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:42:59.444129 containerd[1826]: 2024-09-04 17:42:59.440 [INFO][4842] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c" Namespace="calico-system" Pod="calico-kube-controllers-66fdcc69-fvs8v" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0" Sep 4 17:42:59.504355 kubelet[3385]: I0904 17:42:59.502842 3385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-v25f8" podStartSLOduration=35.502794202 podCreationTimestamp="2024-09-04 17:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:42:59.482697357 +0000 UTC m=+49.348240982" watchObservedRunningTime="2024-09-04 17:42:59.502794202 +0000 UTC m=+49.368337727" Sep 4 17:42:59.526366 containerd[1826]: time="2024-09-04T17:42:59.526164670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:42:59.526366 containerd[1826]: time="2024-09-04T17:42:59.526251571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:42:59.526366 containerd[1826]: time="2024-09-04T17:42:59.526289871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:59.526984 containerd[1826]: time="2024-09-04T17:42:59.526752275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:42:59.593665 containerd[1826]: time="2024-09-04T17:42:59.593621056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66fdcc69-fvs8v,Uid:c47e5638-d358-4a1e-af81-643524e43581,Namespace:calico-system,Attempt:1,} returns sandbox id \"c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c\"" Sep 4 17:42:59.595325 containerd[1826]: time="2024-09-04T17:42:59.595295668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:43:00.466412 systemd-networkd[1395]: calic7da4d637fd: Gained IPv6LL Sep 4 17:43:01.105491 systemd-networkd[1395]: cali349234acc35: Gained IPv6LL Sep 4 17:43:01.233771 containerd[1826]: time="2024-09-04T17:43:01.233434263Z" level=info msg="StopPodSandbox for \"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\"" Sep 4 17:43:01.314821 containerd[1826]: 2024-09-04 17:43:01.281 [INFO][4933] k8s.go 608: Cleaning up netns ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Sep 4 17:43:01.314821 containerd[1826]: 2024-09-04 17:43:01.281 [INFO][4933] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" iface="eth0" netns="/var/run/netns/cni-f7ad4f02-f308-9fd5-414b-929cea03c52e" Sep 4 17:43:01.314821 containerd[1826]: 2024-09-04 17:43:01.282 [INFO][4933] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" iface="eth0" netns="/var/run/netns/cni-f7ad4f02-f308-9fd5-414b-929cea03c52e" Sep 4 17:43:01.314821 containerd[1826]: 2024-09-04 17:43:01.283 [INFO][4933] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" iface="eth0" netns="/var/run/netns/cni-f7ad4f02-f308-9fd5-414b-929cea03c52e" Sep 4 17:43:01.314821 containerd[1826]: 2024-09-04 17:43:01.283 [INFO][4933] k8s.go 615: Releasing IP address(es) ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Sep 4 17:43:01.314821 containerd[1826]: 2024-09-04 17:43:01.283 [INFO][4933] utils.go 188: Calico CNI releasing IP address ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Sep 4 17:43:01.314821 containerd[1826]: 2024-09-04 17:43:01.304 [INFO][4940] ipam_plugin.go 417: Releasing address using handleID ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" HandleID="k8s-pod-network.75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0" Sep 4 17:43:01.314821 containerd[1826]: 2024-09-04 17:43:01.304 [INFO][4940] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:43:01.314821 containerd[1826]: 2024-09-04 17:43:01.304 [INFO][4940] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:43:01.314821 containerd[1826]: 2024-09-04 17:43:01.311 [WARNING][4940] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" HandleID="k8s-pod-network.75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0" Sep 4 17:43:01.314821 containerd[1826]: 2024-09-04 17:43:01.311 [INFO][4940] ipam_plugin.go 445: Releasing address using workloadID ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" HandleID="k8s-pod-network.75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0" Sep 4 17:43:01.314821 containerd[1826]: 2024-09-04 17:43:01.313 [INFO][4940] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:43:01.314821 containerd[1826]: 2024-09-04 17:43:01.313 [INFO][4933] k8s.go 621: Teardown processing complete. ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Sep 4 17:43:01.316274 containerd[1826]: time="2024-09-04T17:43:01.315869956Z" level=info msg="TearDown network for sandbox \"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\" successfully" Sep 4 17:43:01.316274 containerd[1826]: time="2024-09-04T17:43:01.315907357Z" level=info msg="StopPodSandbox for \"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\" returns successfully" Sep 4 17:43:01.318567 containerd[1826]: time="2024-09-04T17:43:01.318536876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lfbkl,Uid:e7e42435-7b56-42d2-be6c-74a6ec2abbc7,Namespace:kube-system,Attempt:1,}" Sep 4 17:43:01.319751 systemd[1]: run-netns-cni\x2df7ad4f02\x2df308\x2d9fd5\x2d414b\x2d929cea03c52e.mount: Deactivated successfully. Sep 4 17:43:01.458542 systemd-networkd[1395]: cali474928ef63e: Link UP Sep 4 17:43:01.459084 systemd-networkd[1395]: cali474928ef63e: Gained carrier Sep 4 17:43:01.478240 containerd[1826]: 2024-09-04 17:43:01.399 [INFO][4946] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0 coredns-5dd5756b68- kube-system e7e42435-7b56-42d2-be6c-74a6ec2abbc7 754 0 2024-09-04 17:42:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4054.1.0-a-6fd622a1a5 coredns-5dd5756b68-lfbkl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali474928ef63e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c" Namespace="kube-system" Pod="coredns-5dd5756b68-lfbkl" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-" Sep 4 17:43:01.478240 containerd[1826]: 2024-09-04 17:43:01.399 [INFO][4946] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c" Namespace="kube-system" Pod="coredns-5dd5756b68-lfbkl" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0" Sep 4 17:43:01.478240 containerd[1826]: 2024-09-04 17:43:01.424 [INFO][4957] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c" HandleID="k8s-pod-network.1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0" Sep 4 
17:43:01.478240 containerd[1826]: 2024-09-04 17:43:01.432 [INFO][4957] ipam_plugin.go 270: Auto assigning IP ContainerID="1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c" HandleID="k8s-pod-network.1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a3e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4054.1.0-a-6fd622a1a5", "pod":"coredns-5dd5756b68-lfbkl", "timestamp":"2024-09-04 17:43:01.424959542 +0000 UTC"}, Hostname:"ci-4054.1.0-a-6fd622a1a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:43:01.478240 containerd[1826]: 2024-09-04 17:43:01.432 [INFO][4957] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:43:01.478240 containerd[1826]: 2024-09-04 17:43:01.432 [INFO][4957] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:43:01.478240 containerd[1826]: 2024-09-04 17:43:01.432 [INFO][4957] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-6fd622a1a5' Sep 4 17:43:01.478240 containerd[1826]: 2024-09-04 17:43:01.433 [INFO][4957] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:01.478240 containerd[1826]: 2024-09-04 17:43:01.437 [INFO][4957] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:01.478240 containerd[1826]: 2024-09-04 17:43:01.442 [INFO][4957] ipam.go 489: Trying affinity for 192.168.88.0/26 host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:01.478240 containerd[1826]: 2024-09-04 17:43:01.443 [INFO][4957] ipam.go 155: Attempting to load block cidr=192.168.88.0/26 host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:01.478240 containerd[1826]: 2024-09-04 17:43:01.445 [INFO][4957] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.0/26 host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:01.478240 containerd[1826]: 2024-09-04 17:43:01.445 [INFO][4957] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.0/26 handle="k8s-pod-network.1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:01.478240 containerd[1826]: 2024-09-04 17:43:01.446 [INFO][4957] ipam.go 1685: Creating new handle: k8s-pod-network.1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c Sep 4 17:43:01.478240 containerd[1826]: 2024-09-04 17:43:01.449 [INFO][4957] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.0/26 handle="k8s-pod-network.1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:01.478240 containerd[1826]: 2024-09-04 17:43:01.453 [INFO][4957] ipam.go 1216: Successfully claimed IPs: [192.168.88.3/26] block=192.168.88.0/26 handle="k8s-pod-network.1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:01.478240 containerd[1826]: 2024-09-04 17:43:01.453 [INFO][4957] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.3/26] handle="k8s-pod-network.1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:01.478240 containerd[1826]: 2024-09-04 17:43:01.453 
[INFO][4957] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:43:01.478240 containerd[1826]: 2024-09-04 17:43:01.453 [INFO][4957] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.3/26] IPv6=[] ContainerID="1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c" HandleID="k8s-pod-network.1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0" Sep 4 17:43:01.483348 containerd[1826]: 2024-09-04 17:43:01.455 [INFO][4946] k8s.go 386: Populated endpoint ContainerID="1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c" Namespace="kube-system" Pod="coredns-5dd5756b68-lfbkl" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e7e42435-7b56-42d2-be6c-74a6ec2abbc7", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 42, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-6fd622a1a5", ContainerID:"", Pod:"coredns-5dd5756b68-lfbkl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali474928ef63e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:43:01.483348 containerd[1826]: 2024-09-04 17:43:01.455 [INFO][4946] k8s.go 387: Calico CNI using IPs: [192.168.88.3/32] ContainerID="1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c" Namespace="kube-system" Pod="coredns-5dd5756b68-lfbkl" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0" Sep 4 17:43:01.483348 containerd[1826]: 2024-09-04 17:43:01.455 [INFO][4946] dataplane_linux.go 68: Setting the host side veth name to cali474928ef63e ContainerID="1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c" Namespace="kube-system" Pod="coredns-5dd5756b68-lfbkl" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0" Sep 4 17:43:01.483348 containerd[1826]: 2024-09-04 17:43:01.458 [INFO][4946] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c" Namespace="kube-system" Pod="coredns-5dd5756b68-lfbkl" 
WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0" Sep 4 17:43:01.483348 containerd[1826]: 2024-09-04 17:43:01.460 [INFO][4946] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c" Namespace="kube-system" Pod="coredns-5dd5756b68-lfbkl" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e7e42435-7b56-42d2-be6c-74a6ec2abbc7", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 42, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-6fd622a1a5", ContainerID:"1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c", Pod:"coredns-5dd5756b68-lfbkl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali474928ef63e", MAC:"ba:30:32:99:39:2d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:43:01.483348 containerd[1826]: 2024-09-04 17:43:01.470 [INFO][4946] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c" Namespace="kube-system" Pod="coredns-5dd5756b68-lfbkl" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0" Sep 4 17:43:01.756612 containerd[1826]: time="2024-09-04T17:43:01.755093219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:43:01.756612 containerd[1826]: time="2024-09-04T17:43:01.755211520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:43:01.756612 containerd[1826]: time="2024-09-04T17:43:01.755233520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:43:01.756612 containerd[1826]: time="2024-09-04T17:43:01.755413921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:43:01.910142 containerd[1826]: time="2024-09-04T17:43:01.909540131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lfbkl,Uid:e7e42435-7b56-42d2-be6c-74a6ec2abbc7,Namespace:kube-system,Attempt:1,} returns sandbox id \"1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c\"" Sep 4 17:43:01.914726 containerd[1826]: time="2024-09-04T17:43:01.914507967Z" level=info msg="CreateContainer within sandbox \"1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:43:01.963192 containerd[1826]: time="2024-09-04T17:43:01.962563313Z" level=info msg="CreateContainer within sandbox \"1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f70903ae03d3486e1a046e4ca99f89a65955ba418e01dc03700d79472a81035c\"" Sep 4 17:43:01.967400 containerd[1826]: time="2024-09-04T17:43:01.964722128Z" level=info msg="StartContainer for \"f70903ae03d3486e1a046e4ca99f89a65955ba418e01dc03700d79472a81035c\"" Sep 4 17:43:02.058506 containerd[1826]: time="2024-09-04T17:43:02.057549297Z" level=info msg="StartContainer for \"f70903ae03d3486e1a046e4ca99f89a65955ba418e01dc03700d79472a81035c\" returns successfully" Sep 4 17:43:02.237631 containerd[1826]: time="2024-09-04T17:43:02.236021482Z" level=info msg="StopPodSandbox for \"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\"" Sep 4 17:43:02.433195 containerd[1826]: 2024-09-04 17:43:02.362 [INFO][5084] k8s.go 608: Cleaning up netns ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Sep 4 17:43:02.433195 containerd[1826]: 2024-09-04 17:43:02.363 [INFO][5084] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" iface="eth0" netns="/var/run/netns/cni-c9342a77-9060-ef1f-d7f3-cc3472db8892" Sep 4 17:43:02.433195 containerd[1826]: 2024-09-04 17:43:02.364 [INFO][5084] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" iface="eth0" netns="/var/run/netns/cni-c9342a77-9060-ef1f-d7f3-cc3472db8892" Sep 4 17:43:02.433195 containerd[1826]: 2024-09-04 17:43:02.364 [INFO][5084] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" iface="eth0" netns="/var/run/netns/cni-c9342a77-9060-ef1f-d7f3-cc3472db8892" Sep 4 17:43:02.433195 containerd[1826]: 2024-09-04 17:43:02.364 [INFO][5084] k8s.go 615: Releasing IP address(es) ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Sep 4 17:43:02.433195 containerd[1826]: 2024-09-04 17:43:02.364 [INFO][5084] utils.go 188: Calico CNI releasing IP address ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Sep 4 17:43:02.433195 containerd[1826]: 2024-09-04 17:43:02.410 [INFO][5091] ipam_plugin.go 417: Releasing address using handleID ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" HandleID="k8s-pod-network.e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0" Sep 4 17:43:02.433195 containerd[1826]: 2024-09-04 17:43:02.410 [INFO][5091] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 17:43:02.433195 containerd[1826]: 2024-09-04 17:43:02.410 [INFO][5091] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:43:02.433195 containerd[1826]: 2024-09-04 17:43:02.422 [WARNING][5091] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" HandleID="k8s-pod-network.e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0" Sep 4 17:43:02.433195 containerd[1826]: 2024-09-04 17:43:02.422 [INFO][5091] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" HandleID="k8s-pod-network.e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0" Sep 4 17:43:02.433195 containerd[1826]: 2024-09-04 17:43:02.426 [INFO][5091] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:43:02.433195 containerd[1826]: 2024-09-04 17:43:02.428 [INFO][5084] k8s.go 621: Teardown processing complete. ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Sep 4 17:43:02.439787 containerd[1826]: time="2024-09-04T17:43:02.434336309Z" level=info msg="TearDown network for sandbox \"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\" successfully" Sep 4 17:43:02.439787 containerd[1826]: time="2024-09-04T17:43:02.434608711Z" level=info msg="StopPodSandbox for \"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\" returns successfully" Sep 4 17:43:02.439787 containerd[1826]: time="2024-09-04T17:43:02.438088836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sv6vb,Uid:491350e8-d092-444b-99b6-cdf34980f429,Namespace:calico-system,Attempt:1,}" Sep 4 17:43:02.442883 systemd[1]: run-netns-cni\x2dc9342a77\x2d9060\x2def1f\x2dd7f3\x2dcc3472db8892.mount: Deactivated successfully. 
Sep 4 17:43:02.497565 kubelet[3385]: I0904 17:43:02.497010 3385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lfbkl" podStartSLOduration=38.49695956 podCreationTimestamp="2024-09-04 17:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:43:02.496281955 +0000 UTC m=+52.361825580" watchObservedRunningTime="2024-09-04 17:43:02.49695956 +0000 UTC m=+52.362503185" Sep 4 17:43:02.857210 systemd-networkd[1395]: cali9f9e90c9ff3: Link UP Sep 4 17:43:02.859114 systemd-networkd[1395]: cali9f9e90c9ff3: Gained carrier Sep 4 17:43:02.881230 containerd[1826]: 2024-09-04 17:43:02.695 [INFO][5098] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0 csi-node-driver- calico-system 491350e8-d092-444b-99b6-cdf34980f429 764 0 2024-09-04 17:42:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4054.1.0-a-6fd622a1a5 csi-node-driver-sv6vb eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali9f9e90c9ff3 [] []}} ContainerID="75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e" Namespace="calico-system" Pod="csi-node-driver-sv6vb" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-" Sep 4 17:43:02.881230 containerd[1826]: 2024-09-04 17:43:02.696 [INFO][5098] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e" Namespace="calico-system" Pod="csi-node-driver-sv6vb" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0" Sep 4 17:43:02.881230 containerd[1826]: 2024-09-04 17:43:02.776 [INFO][5111] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e" HandleID="k8s-pod-network.75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0" Sep 4 17:43:02.881230 containerd[1826]: 2024-09-04 17:43:02.795 [INFO][5111] ipam_plugin.go 270: Auto assigning IP ContainerID="75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e" HandleID="k8s-pod-network.75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b66b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4054.1.0-a-6fd622a1a5", "pod":"csi-node-driver-sv6vb", "timestamp":"2024-09-04 17:43:02.776400772 +0000 UTC"}, Hostname:"ci-4054.1.0-a-6fd622a1a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:43:02.881230 containerd[1826]: 2024-09-04 17:43:02.795 [INFO][5111] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:43:02.881230 containerd[1826]: 2024-09-04 17:43:02.795 [INFO][5111] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:43:02.881230 containerd[1826]: 2024-09-04 17:43:02.795 [INFO][5111] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-6fd622a1a5' Sep 4 17:43:02.881230 containerd[1826]: 2024-09-04 17:43:02.797 [INFO][5111] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:02.881230 containerd[1826]: 2024-09-04 17:43:02.803 [INFO][5111] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:02.881230 containerd[1826]: 2024-09-04 17:43:02.811 [INFO][5111] ipam.go 489: Trying affinity for 192.168.88.0/26 host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:02.881230 containerd[1826]: 2024-09-04 17:43:02.814 [INFO][5111] ipam.go 155: Attempting to load block cidr=192.168.88.0/26 host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:02.881230 containerd[1826]: 2024-09-04 17:43:02.818 [INFO][5111] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.0/26 host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:02.881230 containerd[1826]: 2024-09-04 17:43:02.818 [INFO][5111] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.0/26 handle="k8s-pod-network.75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:02.881230 containerd[1826]: 2024-09-04 17:43:02.822 [INFO][5111] ipam.go 1685: Creating new handle: k8s-pod-network.75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e Sep 4 17:43:02.881230 containerd[1826]: 2024-09-04 17:43:02.828 [INFO][5111] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.0/26 handle="k8s-pod-network.75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:02.881230 containerd[1826]: 2024-09-04 17:43:02.838 [INFO][5111] ipam.go 1216: Successfully claimed IPs: [192.168.88.4/26] block=192.168.88.0/26 handle="k8s-pod-network.75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:02.881230 containerd[1826]: 2024-09-04 17:43:02.838 [INFO][5111] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.4/26] handle="k8s-pod-network.75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:02.881230 containerd[1826]: 2024-09-04 17:43:02.839 [INFO][5111] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:43:02.881230 containerd[1826]: 2024-09-04 17:43:02.839 [INFO][5111] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.4/26] IPv6=[] ContainerID="75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e" HandleID="k8s-pod-network.75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0" Sep 4 17:43:02.882781 containerd[1826]: 2024-09-04 17:43:02.844 [INFO][5098] k8s.go 386: Populated endpoint ContainerID="75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e" Namespace="calico-system" Pod="csi-node-driver-sv6vb" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"491350e8-d092-444b-99b6-cdf34980f429", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 42, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-6fd622a1a5", ContainerID:"", Pod:"csi-node-driver-sv6vb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9f9e90c9ff3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:43:02.882781 containerd[1826]: 2024-09-04 17:43:02.844 [INFO][5098] k8s.go 387: Calico CNI using IPs: [192.168.88.4/32] ContainerID="75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e" Namespace="calico-system" Pod="csi-node-driver-sv6vb" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0" Sep 4 17:43:02.882781 containerd[1826]: 2024-09-04 17:43:02.844 [INFO][5098] dataplane_linux.go 68: Setting the host side veth name to cali9f9e90c9ff3 ContainerID="75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e" Namespace="calico-system" Pod="csi-node-driver-sv6vb" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0" Sep 4 17:43:02.882781 containerd[1826]: 2024-09-04 17:43:02.859 [INFO][5098] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e" Namespace="calico-system" Pod="csi-node-driver-sv6vb" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0" Sep 4 17:43:02.882781 containerd[1826]: 2024-09-04 17:43:02.863 [INFO][5098] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e" Namespace="calico-system" Pod="csi-node-driver-sv6vb" 
WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"491350e8-d092-444b-99b6-cdf34980f429", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 42, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-6fd622a1a5", ContainerID:"75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e", Pod:"csi-node-driver-sv6vb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9f9e90c9ff3", MAC:"c6:fb:3c:9f:42:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:43:02.882781 containerd[1826]: 2024-09-04 17:43:02.877 [INFO][5098] k8s.go 500: Wrote updated endpoint to datastore ContainerID="75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e" Namespace="calico-system" Pod="csi-node-driver-sv6vb" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0" Sep 4 17:43:02.937149 containerd[1826]: time="2024-09-04T17:43:02.935199116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:43:02.937149 containerd[1826]: time="2024-09-04T17:43:02.935265216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:43:02.937149 containerd[1826]: time="2024-09-04T17:43:02.935287416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:43:02.937149 containerd[1826]: time="2024-09-04T17:43:02.935382017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:43:02.963459 systemd-networkd[1395]: cali474928ef63e: Gained IPv6LL Sep 4 17:43:03.022453 containerd[1826]: time="2024-09-04T17:43:03.022136642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sv6vb,Uid:491350e8-d092-444b-99b6-cdf34980f429,Namespace:calico-system,Attempt:1,} returns sandbox id \"75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e\"" Sep 4 17:43:03.355414 containerd[1826]: time="2024-09-04T17:43:03.355359041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:03.358123 containerd[1826]: time="2024-09-04T17:43:03.358076860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Sep 4 17:43:03.363038 containerd[1826]: time="2024-09-04T17:43:03.362896595Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:03.367812 containerd[1826]: time="2024-09-04T17:43:03.367718130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:03.369051 containerd[1826]: time="2024-09-04T17:43:03.368590236Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 3.773260368s" Sep 4 17:43:03.369051 containerd[1826]: time="2024-09-04T17:43:03.368625636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 4 17:43:03.369721 containerd[1826]: time="2024-09-04T17:43:03.369683544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:43:03.383844 containerd[1826]: time="2024-09-04T17:43:03.383793946Z" level=info msg="CreateContainer within sandbox \"c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:43:03.419477 containerd[1826]: time="2024-09-04T17:43:03.419441102Z" level=info msg="CreateContainer within sandbox \"c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0812535c60a83b71914f14b5dde108a825af01c336def4ed18de504cc085d5a9\"" Sep 4 17:43:03.420277 containerd[1826]: time="2024-09-04T17:43:03.419930406Z" level=info msg="StartContainer for \"0812535c60a83b71914f14b5dde108a825af01c336def4ed18de504cc085d5a9\"" Sep 4 17:43:03.491314 containerd[1826]: time="2024-09-04T17:43:03.490499114Z" level=info msg="StartContainer for \"0812535c60a83b71914f14b5dde108a825af01c336def4ed18de504cc085d5a9\" returns successfully" Sep 4 17:43:03.587245 kubelet[3385]: I0904 17:43:03.587206 3385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66fdcc69-fvs8v" podStartSLOduration=29.812951235 
podCreationTimestamp="2024-09-04 17:42:30 +0000 UTC" firstStartedPulling="2024-09-04 17:42:59.594774064 +0000 UTC m=+49.460317589" lastFinishedPulling="2024-09-04 17:43:03.368980639 +0000 UTC m=+53.234524164" observedRunningTime="2024-09-04 17:43:03.515211992 +0000 UTC m=+53.380755617" watchObservedRunningTime="2024-09-04 17:43:03.58715781 +0000 UTC m=+53.452701335" Sep 4 17:43:03.985420 systemd-networkd[1395]: cali9f9e90c9ff3: Gained IPv6LL Sep 4 17:43:04.904168 containerd[1826]: time="2024-09-04T17:43:04.902307379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:04.905920 containerd[1826]: time="2024-09-04T17:43:04.905864605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 4 17:43:04.911934 containerd[1826]: time="2024-09-04T17:43:04.911017142Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:04.917789 containerd[1826]: time="2024-09-04T17:43:04.917758590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:04.919163 containerd[1826]: time="2024-09-04T17:43:04.918765097Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.548946852s" Sep 4 17:43:04.919163 containerd[1826]: time="2024-09-04T17:43:04.918800098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 4 17:43:04.922231 containerd[1826]: time="2024-09-04T17:43:04.922046621Z" level=info msg="CreateContainer within sandbox \"75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:43:05.001135 containerd[1826]: time="2024-09-04T17:43:05.001076390Z" level=info msg="CreateContainer within sandbox \"75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7f8acae8ae84a755ffa5ad448623998765441090990bab60fd54b6aed16b7881\"" Sep 4 17:43:05.002024 containerd[1826]: time="2024-09-04T17:43:05.001976297Z" level=info msg="StartContainer for \"7f8acae8ae84a755ffa5ad448623998765441090990bab60fd54b6aed16b7881\"" Sep 4 17:43:05.067861 containerd[1826]: time="2024-09-04T17:43:05.067821771Z" level=info msg="StartContainer for \"7f8acae8ae84a755ffa5ad448623998765441090990bab60fd54b6aed16b7881\" returns successfully" Sep 4 17:43:05.069156 containerd[1826]: time="2024-09-04T17:43:05.069126580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:43:07.175447 containerd[1826]: time="2024-09-04T17:43:07.175379793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:07.178145 containerd[1826]: time="2024-09-04T17:43:07.178110013Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 4 17:43:07.186277 containerd[1826]: time="2024-09-04T17:43:07.183609953Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:07.188675 containerd[1826]: time="2024-09-04T17:43:07.188625789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:07.190154 containerd[1826]: time="2024-09-04T17:43:07.190006899Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.120666317s" Sep 4 17:43:07.190154 containerd[1826]: time="2024-09-04T17:43:07.190045799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 4 17:43:07.194435 containerd[1826]: time="2024-09-04T17:43:07.194080129Z" level=info msg="CreateContainer within sandbox \"75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:43:07.234981 containerd[1826]: time="2024-09-04T17:43:07.234945924Z" level=info msg="CreateContainer within sandbox \"75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6799ebc911f60f06ddae8a8791595adabcbdfd7d60dbb0990bdadc1c3789a101\"" Sep 4 17:43:07.236185 containerd[1826]: time="2024-09-04T17:43:07.236140433Z" level=info msg="StartContainer for \"6799ebc911f60f06ddae8a8791595adabcbdfd7d60dbb0990bdadc1c3789a101\"" Sep 4 17:43:07.310611 containerd[1826]: time="2024-09-04T17:43:07.310478971Z" level=info msg="StartContainer for \"6799ebc911f60f06ddae8a8791595adabcbdfd7d60dbb0990bdadc1c3789a101\" returns successfully" Sep 4 17:43:07.423912 kubelet[3385]: I0904 17:43:07.423882 3385 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:43:07.423912 kubelet[3385]: I0904 17:43:07.423918 3385 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:43:07.524360 kubelet[3385]: I0904 17:43:07.523937 3385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-sv6vb" podStartSLOduration=33.357377966 podCreationTimestamp="2024-09-04 17:42:30 +0000 UTC" firstStartedPulling="2024-09-04 17:43:03.024195856 +0000 UTC m=+52.889739381" lastFinishedPulling="2024-09-04 17:43:07.190711204 +0000 UTC m=+57.056254829" observedRunningTime="2024-09-04 17:43:07.523519712 +0000 UTC m=+57.389063237" watchObservedRunningTime="2024-09-04 17:43:07.523893414 +0000 UTC m=+57.389436939" Sep 4 17:43:10.233762 containerd[1826]: time="2024-09-04T17:43:10.233178112Z" level=info msg="StopPodSandbox for 
\"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\"" Sep 4 17:43:10.290995 containerd[1826]: 2024-09-04 17:43:10.265 [WARNING][5333] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"491350e8-d092-444b-99b6-cdf34980f429", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 42, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-6fd622a1a5", ContainerID:"75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e", Pod:"csi-node-driver-sv6vb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9f9e90c9ff3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:43:10.290995 containerd[1826]: 2024-09-04 17:43:10.265 [INFO][5333] k8s.go 608: Cleaning up netns ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Sep 4 17:43:10.290995 containerd[1826]: 2024-09-04 17:43:10.265 [INFO][5333] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" iface="eth0" netns="" Sep 4 17:43:10.290995 containerd[1826]: 2024-09-04 17:43:10.265 [INFO][5333] k8s.go 615: Releasing IP address(es) ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Sep 4 17:43:10.290995 containerd[1826]: 2024-09-04 17:43:10.265 [INFO][5333] utils.go 188: Calico CNI releasing IP address ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Sep 4 17:43:10.290995 containerd[1826]: 2024-09-04 17:43:10.283 [INFO][5339] ipam_plugin.go 417: Releasing address using handleID ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" HandleID="k8s-pod-network.e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0" Sep 4 17:43:10.290995 containerd[1826]: 2024-09-04 17:43:10.283 [INFO][5339] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:43:10.290995 containerd[1826]: 2024-09-04 17:43:10.283 [INFO][5339] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:43:10.290995 containerd[1826]: 2024-09-04 17:43:10.287 [WARNING][5339] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" HandleID="k8s-pod-network.e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0" Sep 4 17:43:10.290995 containerd[1826]: 2024-09-04 17:43:10.288 [INFO][5339] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" HandleID="k8s-pod-network.e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0" Sep 4 17:43:10.290995 containerd[1826]: 2024-09-04 17:43:10.289 [INFO][5339] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:43:10.290995 containerd[1826]: 2024-09-04 17:43:10.290 [INFO][5333] k8s.go 621: Teardown processing complete. ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Sep 4 17:43:10.291760 containerd[1826]: time="2024-09-04T17:43:10.291056331Z" level=info msg="TearDown network for sandbox \"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\" successfully" Sep 4 17:43:10.291760 containerd[1826]: time="2024-09-04T17:43:10.291105431Z" level=info msg="StopPodSandbox for \"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\" returns successfully" Sep 4 17:43:10.292041 containerd[1826]: time="2024-09-04T17:43:10.292009638Z" level=info msg="RemovePodSandbox for \"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\"" Sep 4 17:43:10.292123 containerd[1826]: time="2024-09-04T17:43:10.292043938Z" level=info msg="Forcibly stopping sandbox \"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\"" Sep 4 17:43:10.348911 containerd[1826]: 2024-09-04 17:43:10.323 [WARNING][5357] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"491350e8-d092-444b-99b6-cdf34980f429", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 42, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-6fd622a1a5", ContainerID:"75f67b2982d1b1079c22c5e0480873876820620e4377bbea16a1faca7eb6584e", Pod:"csi-node-driver-sv6vb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9f9e90c9ff3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:43:10.348911 containerd[1826]: 2024-09-04 17:43:10.323 [INFO][5357] k8s.go 608: Cleaning up netns ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Sep 4 17:43:10.348911 containerd[1826]: 2024-09-04 17:43:10.323 [INFO][5357] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" iface="eth0" netns="" Sep 4 17:43:10.348911 containerd[1826]: 2024-09-04 17:43:10.323 [INFO][5357] k8s.go 615: Releasing IP address(es) ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Sep 4 17:43:10.348911 containerd[1826]: 2024-09-04 17:43:10.323 [INFO][5357] utils.go 188: Calico CNI releasing IP address ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Sep 4 17:43:10.348911 containerd[1826]: 2024-09-04 17:43:10.341 [INFO][5363] ipam_plugin.go 417: Releasing address using handleID ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" HandleID="k8s-pod-network.e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0" Sep 4 17:43:10.348911 containerd[1826]: 2024-09-04 17:43:10.341 [INFO][5363] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:43:10.348911 containerd[1826]: 2024-09-04 17:43:10.341 [INFO][5363] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:43:10.348911 containerd[1826]: 2024-09-04 17:43:10.345 [WARNING][5363] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" HandleID="k8s-pod-network.e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0" Sep 4 17:43:10.348911 containerd[1826]: 2024-09-04 17:43:10.346 [INFO][5363] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" HandleID="k8s-pod-network.e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-csi--node--driver--sv6vb-eth0" Sep 4 17:43:10.348911 containerd[1826]: 2024-09-04 17:43:10.347 [INFO][5363] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:43:10.348911 containerd[1826]: 2024-09-04 17:43:10.348 [INFO][5357] k8s.go 621: Teardown processing complete. ContainerID="e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709" Sep 4 17:43:10.349678 containerd[1826]: time="2024-09-04T17:43:10.348971350Z" level=info msg="TearDown network for sandbox \"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\" successfully" Sep 4 17:43:10.357433 containerd[1826]: time="2024-09-04T17:43:10.357392311Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:43:10.357536 containerd[1826]: time="2024-09-04T17:43:10.357467811Z" level=info msg="RemovePodSandbox \"e5e9fa958172d1675df28191d9e3dda91ec7ed66d7ae40c86b74df23b337e709\" returns successfully" Sep 4 17:43:10.358108 containerd[1826]: time="2024-09-04T17:43:10.358069016Z" level=info msg="StopPodSandbox for \"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\"" Sep 4 17:43:10.427284 containerd[1826]: 2024-09-04 17:43:10.387 [WARNING][5381] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"5d044e69-88ac-4382-a02d-ee5526a871c4", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 42, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-6fd622a1a5", ContainerID:"234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d", Pod:"coredns-5dd5756b68-v25f8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic7da4d637fd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:43:10.427284 containerd[1826]: 2024-09-04 17:43:10.387 [INFO][5381] k8s.go 608: Cleaning up netns ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Sep 4 17:43:10.427284 containerd[1826]: 2024-09-04 17:43:10.387 [INFO][5381] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" iface="eth0" netns="" Sep 4 17:43:10.427284 containerd[1826]: 2024-09-04 17:43:10.387 [INFO][5381] k8s.go 615: Releasing IP address(es) ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Sep 4 17:43:10.427284 containerd[1826]: 2024-09-04 17:43:10.387 [INFO][5381] utils.go 188: Calico CNI releasing IP address ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Sep 4 17:43:10.427284 containerd[1826]: 2024-09-04 17:43:10.417 [INFO][5387] ipam_plugin.go 417: Releasing address using handleID ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" HandleID="k8s-pod-network.0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0" Sep 4 17:43:10.427284 containerd[1826]: 2024-09-04 17:43:10.417 [INFO][5387] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:43:10.427284 containerd[1826]: 2024-09-04 17:43:10.417 [INFO][5387] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:43:10.427284 containerd[1826]: 2024-09-04 17:43:10.423 [WARNING][5387] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" HandleID="k8s-pod-network.0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0" Sep 4 17:43:10.427284 containerd[1826]: 2024-09-04 17:43:10.423 [INFO][5387] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" HandleID="k8s-pod-network.0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0" Sep 4 17:43:10.427284 containerd[1826]: 2024-09-04 17:43:10.425 [INFO][5387] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:43:10.427284 containerd[1826]: 2024-09-04 17:43:10.426 [INFO][5381] k8s.go 621: Teardown processing complete. ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Sep 4 17:43:10.428092 containerd[1826]: time="2024-09-04T17:43:10.427316717Z" level=info msg="TearDown network for sandbox \"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\" successfully" Sep 4 17:43:10.428092 containerd[1826]: time="2024-09-04T17:43:10.427345117Z" level=info msg="StopPodSandbox for \"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\" returns successfully" Sep 4 17:43:10.428092 containerd[1826]: time="2024-09-04T17:43:10.427808620Z" level=info msg="RemovePodSandbox for \"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\"" Sep 4 17:43:10.428092 containerd[1826]: time="2024-09-04T17:43:10.427860620Z" level=info msg="Forcibly stopping sandbox \"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\"" Sep 4 17:43:10.484044 containerd[1826]: 2024-09-04 17:43:10.458 [WARNING][5406] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"5d044e69-88ac-4382-a02d-ee5526a871c4", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 42, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-6fd622a1a5", ContainerID:"234044a6609ea1ca24f8b7c88233fd1427cd924bd6e7c0e57b81ac0f63b8099d", Pod:"coredns-5dd5756b68-v25f8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic7da4d637fd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:43:10.484044 containerd[1826]: 2024-09-04 17:43:10.458 [INFO][5406] k8s.go 608: Cleaning up netns ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Sep 4 17:43:10.484044 containerd[1826]: 2024-09-04 17:43:10.458 [INFO][5406] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" iface="eth0" netns="" Sep 4 17:43:10.484044 containerd[1826]: 2024-09-04 17:43:10.458 [INFO][5406] k8s.go 615: Releasing IP address(es) ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Sep 4 17:43:10.484044 containerd[1826]: 2024-09-04 17:43:10.458 [INFO][5406] utils.go 188: Calico CNI releasing IP address ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Sep 4 17:43:10.484044 containerd[1826]: 2024-09-04 17:43:10.476 [INFO][5412] ipam_plugin.go 417: Releasing address using handleID ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" HandleID="k8s-pod-network.0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0" Sep 4 17:43:10.484044 containerd[1826]: 2024-09-04 17:43:10.476 [INFO][5412] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:43:10.484044 containerd[1826]: 2024-09-04 17:43:10.476 [INFO][5412] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:43:10.484044 containerd[1826]: 2024-09-04 17:43:10.480 [WARNING][5412] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" HandleID="k8s-pod-network.0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0" Sep 4 17:43:10.484044 containerd[1826]: 2024-09-04 17:43:10.480 [INFO][5412] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" HandleID="k8s-pod-network.0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--v25f8-eth0" Sep 4 17:43:10.484044 containerd[1826]: 2024-09-04 17:43:10.482 [INFO][5412] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:43:10.484044 containerd[1826]: 2024-09-04 17:43:10.482 [INFO][5406] k8s.go 621: Teardown processing complete. ContainerID="0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765" Sep 4 17:43:10.484044 containerd[1826]: time="2024-09-04T17:43:10.483892426Z" level=info msg="TearDown network for sandbox \"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\" successfully" Sep 4 17:43:10.492934 containerd[1826]: time="2024-09-04T17:43:10.492889191Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:43:10.493128 containerd[1826]: time="2024-09-04T17:43:10.492957191Z" level=info msg="RemovePodSandbox \"0cec7ef0d943d35f0db3bc7c9481ffd0f2b05be56367c1a0d14682f4d16db765\" returns successfully" Sep 4 17:43:10.493565 containerd[1826]: time="2024-09-04T17:43:10.493525095Z" level=info msg="StopPodSandbox for \"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\"" Sep 4 17:43:10.572382 containerd[1826]: 2024-09-04 17:43:10.527 [WARNING][5430] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e7e42435-7b56-42d2-be6c-74a6ec2abbc7", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 42, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-6fd622a1a5", ContainerID:"1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c", Pod:"coredns-5dd5756b68-lfbkl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali474928ef63e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:43:10.572382 containerd[1826]: 2024-09-04 17:43:10.527 [INFO][5430] k8s.go 608: Cleaning up netns ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Sep 4 17:43:10.572382 containerd[1826]: 2024-09-04 17:43:10.527 [INFO][5430] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" iface="eth0" netns="" Sep 4 17:43:10.572382 containerd[1826]: 2024-09-04 17:43:10.527 [INFO][5430] k8s.go 615: Releasing IP address(es) ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Sep 4 17:43:10.572382 containerd[1826]: 2024-09-04 17:43:10.527 [INFO][5430] utils.go 188: Calico CNI releasing IP address ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Sep 4 17:43:10.572382 containerd[1826]: 2024-09-04 17:43:10.556 [INFO][5436] ipam_plugin.go 417: Releasing address using handleID ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" HandleID="k8s-pod-network.75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0" Sep 4 17:43:10.572382 containerd[1826]: 2024-09-04 17:43:10.556 [INFO][5436] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:43:10.572382 containerd[1826]: 2024-09-04 17:43:10.556 [INFO][5436] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:43:10.572382 containerd[1826]: 2024-09-04 17:43:10.565 [WARNING][5436] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" HandleID="k8s-pod-network.75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0" Sep 4 17:43:10.572382 containerd[1826]: 2024-09-04 17:43:10.565 [INFO][5436] ipam_plugin.go 445: Releasing address using workloadID ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" HandleID="k8s-pod-network.75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0" Sep 4 17:43:10.572382 containerd[1826]: 2024-09-04 17:43:10.569 [INFO][5436] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:43:10.572382 containerd[1826]: 2024-09-04 17:43:10.570 [INFO][5430] k8s.go 621: Teardown processing complete. ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Sep 4 17:43:10.573586 containerd[1826]: time="2024-09-04T17:43:10.572393466Z" level=info msg="TearDown network for sandbox \"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\" successfully" Sep 4 17:43:10.573586 containerd[1826]: time="2024-09-04T17:43:10.572418966Z" level=info msg="StopPodSandbox for \"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\" returns successfully" Sep 4 17:43:10.573844 containerd[1826]: time="2024-09-04T17:43:10.573499974Z" level=info msg="RemovePodSandbox for \"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\"" Sep 4 17:43:10.573844 containerd[1826]: time="2024-09-04T17:43:10.573650175Z" level=info msg="Forcibly stopping sandbox \"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\"" Sep 4 17:43:10.634778 containerd[1826]: 2024-09-04 17:43:10.609 [WARNING][5454] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e7e42435-7b56-42d2-be6c-74a6ec2abbc7", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 42, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-6fd622a1a5", ContainerID:"1e2db6aef5ba2a0666840050ae6a6707a19755a4b93335c25799abc5bfddde1c", Pod:"coredns-5dd5756b68-lfbkl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali474928ef63e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:43:10.634778 containerd[1826]: 2024-09-04 17:43:10.609 [INFO][5454] k8s.go 608: Cleaning up netns ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Sep 4 17:43:10.634778 containerd[1826]: 2024-09-04 17:43:10.609 [INFO][5454] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" iface="eth0" netns="" Sep 4 17:43:10.634778 containerd[1826]: 2024-09-04 17:43:10.609 [INFO][5454] k8s.go 615: Releasing IP address(es) ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Sep 4 17:43:10.634778 containerd[1826]: 2024-09-04 17:43:10.609 [INFO][5454] utils.go 188: Calico CNI releasing IP address ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Sep 4 17:43:10.634778 containerd[1826]: 2024-09-04 17:43:10.626 [INFO][5460] ipam_plugin.go 417: Releasing address using handleID ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" HandleID="k8s-pod-network.75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0" Sep 4 17:43:10.634778 containerd[1826]: 2024-09-04 17:43:10.627 [INFO][5460] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:43:10.634778 containerd[1826]: 2024-09-04 17:43:10.627 [INFO][5460] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:43:10.634778 containerd[1826]: 2024-09-04 17:43:10.631 [WARNING][5460] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" HandleID="k8s-pod-network.75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0" Sep 4 17:43:10.634778 containerd[1826]: 2024-09-04 17:43:10.631 [INFO][5460] ipam_plugin.go 445: Releasing address using workloadID ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" HandleID="k8s-pod-network.75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-coredns--5dd5756b68--lfbkl-eth0" Sep 4 17:43:10.634778 containerd[1826]: 2024-09-04 17:43:10.633 [INFO][5460] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:43:10.634778 containerd[1826]: 2024-09-04 17:43:10.633 [INFO][5454] k8s.go 621: Teardown processing complete. ContainerID="75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56" Sep 4 17:43:10.634778 containerd[1826]: time="2024-09-04T17:43:10.634753617Z" level=info msg="TearDown network for sandbox \"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\" successfully" Sep 4 17:43:10.642663 containerd[1826]: time="2024-09-04T17:43:10.642450373Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:43:10.642663 containerd[1826]: time="2024-09-04T17:43:10.642522073Z" level=info msg="RemovePodSandbox \"75af77ee39887a96699449cb63758a2cb73500ea7133e181dc9ba310c7bd8b56\" returns successfully" Sep 4 17:43:10.643061 containerd[1826]: time="2024-09-04T17:43:10.643031977Z" level=info msg="StopPodSandbox for \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\"" Sep 4 17:43:10.702155 containerd[1826]: 2024-09-04 17:43:10.672 [WARNING][5478] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0", GenerateName:"calico-kube-controllers-66fdcc69-", Namespace:"calico-system", SelfLink:"", UID:"c47e5638-d358-4a1e-af81-643524e43581", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 42, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66fdcc69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-6fd622a1a5", ContainerID:"c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c", Pod:"calico-kube-controllers-66fdcc69-fvs8v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali349234acc35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:43:10.702155 containerd[1826]: 2024-09-04 17:43:10.673 [INFO][5478] k8s.go 608: Cleaning up netns ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Sep 4 17:43:10.702155 containerd[1826]: 2024-09-04 17:43:10.673 [INFO][5478] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" iface="eth0" netns="" Sep 4 17:43:10.702155 containerd[1826]: 2024-09-04 17:43:10.673 [INFO][5478] k8s.go 615: Releasing IP address(es) ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Sep 4 17:43:10.702155 containerd[1826]: 2024-09-04 17:43:10.673 [INFO][5478] utils.go 188: Calico CNI releasing IP address ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Sep 4 17:43:10.702155 containerd[1826]: 2024-09-04 17:43:10.694 [INFO][5484] ipam_plugin.go 417: Releasing address using handleID ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" HandleID="k8s-pod-network.47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0" Sep 4 17:43:10.702155 containerd[1826]: 2024-09-04 17:43:10.694 [INFO][5484] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:43:10.702155 containerd[1826]: 2024-09-04 17:43:10.694 [INFO][5484] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:43:10.702155 containerd[1826]: 2024-09-04 17:43:10.699 [WARNING][5484] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" HandleID="k8s-pod-network.47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0" Sep 4 17:43:10.702155 containerd[1826]: 2024-09-04 17:43:10.699 [INFO][5484] ipam_plugin.go 445: Releasing address using workloadID ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" HandleID="k8s-pod-network.47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0" Sep 4 17:43:10.702155 containerd[1826]: 2024-09-04 17:43:10.700 [INFO][5484] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:43:10.702155 containerd[1826]: 2024-09-04 17:43:10.701 [INFO][5478] k8s.go 621: Teardown processing complete. ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Sep 4 17:43:10.702778 containerd[1826]: time="2024-09-04T17:43:10.702196305Z" level=info msg="TearDown network for sandbox \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\" successfully" Sep 4 17:43:10.702778 containerd[1826]: time="2024-09-04T17:43:10.702228705Z" level=info msg="StopPodSandbox for \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\" returns successfully" Sep 4 17:43:10.702778 containerd[1826]: time="2024-09-04T17:43:10.702697409Z" level=info msg="RemovePodSandbox for \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\"" Sep 4 17:43:10.702778 containerd[1826]: time="2024-09-04T17:43:10.702729609Z" level=info msg="Forcibly stopping sandbox \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\"" Sep 4 17:43:10.757309 containerd[1826]: 2024-09-04 17:43:10.732 [WARNING][5502] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0", GenerateName:"calico-kube-controllers-66fdcc69-", Namespace:"calico-system", SelfLink:"", UID:"c47e5638-d358-4a1e-af81-643524e43581", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 42, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66fdcc69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-6fd622a1a5", ContainerID:"c1527137f80574222126ede6bcfe518935f3c69dfe324cfe106b7ac67e0a8d8c", Pod:"calico-kube-controllers-66fdcc69-fvs8v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali349234acc35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:43:10.757309 containerd[1826]: 2024-09-04 17:43:10.732 [INFO][5502] k8s.go 608: Cleaning up netns ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Sep 4 17:43:10.757309 containerd[1826]: 2024-09-04 17:43:10.732 [INFO][5502] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" iface="eth0" netns="" Sep 4 17:43:10.757309 containerd[1826]: 2024-09-04 17:43:10.732 [INFO][5502] k8s.go 615: Releasing IP address(es) ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Sep 4 17:43:10.757309 containerd[1826]: 2024-09-04 17:43:10.732 [INFO][5502] utils.go 188: Calico CNI releasing IP address ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Sep 4 17:43:10.757309 containerd[1826]: 2024-09-04 17:43:10.749 [INFO][5508] ipam_plugin.go 417: Releasing address using handleID ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" HandleID="k8s-pod-network.47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0" Sep 4 17:43:10.757309 containerd[1826]: 2024-09-04 17:43:10.750 [INFO][5508] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:43:10.757309 containerd[1826]: 2024-09-04 17:43:10.750 [INFO][5508] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:43:10.757309 containerd[1826]: 2024-09-04 17:43:10.754 [WARNING][5508] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" HandleID="k8s-pod-network.47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0" Sep 4 17:43:10.757309 containerd[1826]: 2024-09-04 17:43:10.754 [INFO][5508] ipam_plugin.go 445: Releasing address using workloadID ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" HandleID="k8s-pod-network.47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-calico--kube--controllers--66fdcc69--fvs8v-eth0" Sep 4 17:43:10.757309 containerd[1826]: 2024-09-04 17:43:10.755 [INFO][5508] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:43:10.757309 containerd[1826]: 2024-09-04 17:43:10.756 [INFO][5502] k8s.go 621: Teardown processing complete. ContainerID="47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838" Sep 4 17:43:10.757309 containerd[1826]: time="2024-09-04T17:43:10.757176403Z" level=info msg="TearDown network for sandbox \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\" successfully" Sep 4 17:43:10.765220 containerd[1826]: time="2024-09-04T17:43:10.765095560Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:43:10.765523 containerd[1826]: time="2024-09-04T17:43:10.765292461Z" level=info msg="RemovePodSandbox \"47239e0535d007bb314c21ae0d3bc29630ef91c8fefdd66dcba8c6c44890e838\" returns successfully" Sep 4 17:43:15.324659 kubelet[3385]: I0904 17:43:15.324376 3385 topology_manager.go:215] "Topology Admit Handler" podUID="a9f43663-f169-4008-91e6-8921158c2982" podNamespace="calico-apiserver" podName="calico-apiserver-b955b7974-7c4rk" Sep 4 17:43:15.339131 kubelet[3385]: W0904 17:43:15.337925 3385 reflector.go:535] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4054.1.0-a-6fd622a1a5" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4054.1.0-a-6fd622a1a5' and this object Sep 4 17:43:15.339131 kubelet[3385]: E0904 17:43:15.337966 3385 reflector.go:147] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4054.1.0-a-6fd622a1a5" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4054.1.0-a-6fd622a1a5' and this object Sep 4 17:43:15.339131 kubelet[3385]: W0904 17:43:15.338031 3385 reflector.go:535] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4054.1.0-a-6fd622a1a5" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4054.1.0-a-6fd622a1a5' and this object Sep 4 17:43:15.339131 kubelet[3385]: E0904 17:43:15.338048 3385 reflector.go:147] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4054.1.0-a-6fd622a1a5" cannot list resource "configmaps" in API 
group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4054.1.0-a-6fd622a1a5' and this object Sep 4 17:43:15.343589 kubelet[3385]: I0904 17:43:15.342732 3385 topology_manager.go:215] "Topology Admit Handler" podUID="79e4c59c-6e48-433a-92ae-ff48085386bf" podNamespace="calico-apiserver" podName="calico-apiserver-b955b7974-888wf" Sep 4 17:43:15.428116 kubelet[3385]: I0904 17:43:15.428041 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6crqz\" (UniqueName: \"kubernetes.io/projected/79e4c59c-6e48-433a-92ae-ff48085386bf-kube-api-access-6crqz\") pod \"calico-apiserver-b955b7974-888wf\" (UID: \"79e4c59c-6e48-433a-92ae-ff48085386bf\") " pod="calico-apiserver/calico-apiserver-b955b7974-888wf" Sep 4 17:43:15.428116 kubelet[3385]: I0904 17:43:15.428093 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmt68\" (UniqueName: \"kubernetes.io/projected/a9f43663-f169-4008-91e6-8921158c2982-kube-api-access-vmt68\") pod \"calico-apiserver-b955b7974-7c4rk\" (UID: \"a9f43663-f169-4008-91e6-8921158c2982\") " pod="calico-apiserver/calico-apiserver-b955b7974-7c4rk" Sep 4 17:43:15.428375 kubelet[3385]: I0904 17:43:15.428150 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/79e4c59c-6e48-433a-92ae-ff48085386bf-calico-apiserver-certs\") pod \"calico-apiserver-b955b7974-888wf\" (UID: \"79e4c59c-6e48-433a-92ae-ff48085386bf\") " pod="calico-apiserver/calico-apiserver-b955b7974-888wf" Sep 4 17:43:15.428375 kubelet[3385]: I0904 17:43:15.428218 3385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a9f43663-f169-4008-91e6-8921158c2982-calico-apiserver-certs\") pod \"calico-apiserver-b955b7974-7c4rk\" (UID: \"a9f43663-f169-4008-91e6-8921158c2982\") " pod="calico-apiserver/calico-apiserver-b955b7974-7c4rk" Sep 4 17:43:16.530013 kubelet[3385]: E0904 17:43:16.529934 3385 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Sep 4 17:43:16.531184 kubelet[3385]: E0904 17:43:16.530245 3385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79e4c59c-6e48-433a-92ae-ff48085386bf-calico-apiserver-certs podName:79e4c59c-6e48-433a-92ae-ff48085386bf nodeName:}" failed. No retries permitted until 2024-09-04 17:43:17.03012281 +0000 UTC m=+66.895666435 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/79e4c59c-6e48-433a-92ae-ff48085386bf-calico-apiserver-certs") pod "calico-apiserver-b955b7974-888wf" (UID: "79e4c59c-6e48-433a-92ae-ff48085386bf") : failed to sync secret cache: timed out waiting for the condition Sep 4 17:43:16.531184 kubelet[3385]: E0904 17:43:16.530954 3385 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Sep 4 17:43:16.531184 kubelet[3385]: E0904 17:43:16.531108 3385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9f43663-f169-4008-91e6-8921158c2982-calico-apiserver-certs podName:a9f43663-f169-4008-91e6-8921158c2982 nodeName:}" failed. 
No retries permitted until 2024-09-04 17:43:17.031057716 +0000 UTC m=+66.896601341 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/a9f43663-f169-4008-91e6-8921158c2982-calico-apiserver-certs") pod "calico-apiserver-b955b7974-7c4rk" (UID: "a9f43663-f169-4008-91e6-8921158c2982") : failed to sync secret cache: timed out waiting for the condition Sep 4 17:43:17.137822 containerd[1826]: time="2024-09-04T17:43:17.137775116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b955b7974-7c4rk,Uid:a9f43663-f169-4008-91e6-8921158c2982,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:43:17.150569 containerd[1826]: time="2024-09-04T17:43:17.150534308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b955b7974-888wf,Uid:79e4c59c-6e48-433a-92ae-ff48085386bf,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:43:17.314513 systemd-networkd[1395]: cali2a318e2b2d2: Link UP Sep 4 17:43:17.319139 systemd-networkd[1395]: cali2a318e2b2d2: Gained carrier Sep 4 17:43:17.343318 containerd[1826]: 2024-09-04 17:43:17.222 [INFO][5588] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--7c4rk-eth0 calico-apiserver-b955b7974- calico-apiserver a9f43663-f169-4008-91e6-8921158c2982 870 0 2024-09-04 17:43:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b955b7974 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4054.1.0-a-6fd622a1a5 calico-apiserver-b955b7974-7c4rk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2a318e2b2d2 [] []}} ContainerID="309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd" Namespace="calico-apiserver" Pod="calico-apiserver-b955b7974-7c4rk" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--7c4rk-" Sep 4 17:43:17.343318 containerd[1826]: 2024-09-04 17:43:17.222 [INFO][5588] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd" Namespace="calico-apiserver" Pod="calico-apiserver-b955b7974-7c4rk" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--7c4rk-eth0" Sep 4 17:43:17.343318 containerd[1826]: 2024-09-04 17:43:17.265 [INFO][5610] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd" HandleID="k8s-pod-network.309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--7c4rk-eth0" Sep 4 17:43:17.343318 containerd[1826]: 2024-09-04 17:43:17.281 [INFO][5610] ipam_plugin.go 270: Auto assigning IP ContainerID="309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd" HandleID="k8s-pod-network.309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--7c4rk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a820), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4054.1.0-a-6fd622a1a5", "pod":"calico-apiserver-b955b7974-7c4rk", "timestamp":"2024-09-04 17:43:17.265715243 +0000 UTC"}, Hostname:"ci-4054.1.0-a-6fd622a1a5", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:43:17.343318 containerd[1826]: 2024-09-04 17:43:17.281 [INFO][5610] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:43:17.343318 containerd[1826]: 2024-09-04 17:43:17.281 [INFO][5610] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:43:17.343318 containerd[1826]: 2024-09-04 17:43:17.281 [INFO][5610] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-6fd622a1a5' Sep 4 17:43:17.343318 containerd[1826]: 2024-09-04 17:43:17.282 [INFO][5610] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:17.343318 containerd[1826]: 2024-09-04 17:43:17.288 [INFO][5610] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:17.343318 containerd[1826]: 2024-09-04 17:43:17.291 [INFO][5610] ipam.go 489: Trying affinity for 192.168.88.0/26 host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:17.343318 containerd[1826]: 2024-09-04 17:43:17.292 [INFO][5610] ipam.go 155: Attempting to load block cidr=192.168.88.0/26 host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:17.343318 containerd[1826]: 2024-09-04 17:43:17.294 [INFO][5610] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.0/26 host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:17.343318 containerd[1826]: 2024-09-04 17:43:17.294 [INFO][5610] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.0/26 handle="k8s-pod-network.309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:17.343318 containerd[1826]: 2024-09-04 17:43:17.296 [INFO][5610] ipam.go 1685: Creating new handle: k8s-pod-network.309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd Sep 4 17:43:17.343318 containerd[1826]: 2024-09-04 17:43:17.299 [INFO][5610] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.0/26 handle="k8s-pod-network.309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:17.343318 containerd[1826]: 2024-09-04 17:43:17.303 [INFO][5610] ipam.go 1216: Successfully claimed IPs: [192.168.88.5/26] block=192.168.88.0/26 handle="k8s-pod-network.309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:17.343318 containerd[1826]: 2024-09-04 17:43:17.303 [INFO][5610] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.5/26] handle="k8s-pod-network.309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:17.343318 containerd[1826]: 2024-09-04 17:43:17.303 [INFO][5610] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:43:17.343318 containerd[1826]: 2024-09-04 17:43:17.303 [INFO][5610] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.5/26] IPv6=[] ContainerID="309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd" HandleID="k8s-pod-network.309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--7c4rk-eth0" Sep 4 17:43:17.347859 containerd[1826]: 2024-09-04 17:43:17.307 [INFO][5588] k8s.go 386: Populated endpoint ContainerID="309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd" Namespace="calico-apiserver" Pod="calico-apiserver-b955b7974-7c4rk" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--7c4rk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--7c4rk-eth0", GenerateName:"calico-apiserver-b955b7974-", Namespace:"calico-apiserver", SelfLink:"", UID:"a9f43663-f169-4008-91e6-8921158c2982", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 43, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b955b7974", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-6fd622a1a5", ContainerID:"", Pod:"calico-apiserver-b955b7974-7c4rk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2a318e2b2d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:43:17.347859 containerd[1826]: 2024-09-04 17:43:17.307 [INFO][5588] k8s.go 387: Calico CNI using IPs: [192.168.88.5/32] ContainerID="309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd" Namespace="calico-apiserver" Pod="calico-apiserver-b955b7974-7c4rk" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--7c4rk-eth0" Sep 4 17:43:17.347859 containerd[1826]: 2024-09-04 17:43:17.308 [INFO][5588] dataplane_linux.go 68: Setting the host side veth name to cali2a318e2b2d2 ContainerID="309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd" Namespace="calico-apiserver" Pod="calico-apiserver-b955b7974-7c4rk" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--7c4rk-eth0" Sep 4 17:43:17.347859 containerd[1826]: 2024-09-04 17:43:17.318 [INFO][5588] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd" Namespace="calico-apiserver" Pod="calico-apiserver-b955b7974-7c4rk" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--7c4rk-eth0" Sep 4 17:43:17.347859 containerd[1826]: 2024-09-04 17:43:17.319 [INFO][5588] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd" Namespace="calico-apiserver" Pod="calico-apiserver-b955b7974-7c4rk" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--7c4rk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--7c4rk-eth0", GenerateName:"calico-apiserver-b955b7974-", Namespace:"calico-apiserver", SelfLink:"", UID:"a9f43663-f169-4008-91e6-8921158c2982", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 43, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b955b7974", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-6fd622a1a5", ContainerID:"309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd", Pod:"calico-apiserver-b955b7974-7c4rk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2a318e2b2d2", MAC:"d2:24:c4:7f:8c:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:43:17.347859 containerd[1826]: 2024-09-04 17:43:17.336 [INFO][5588] k8s.go 500: Wrote updated endpoint to datastore ContainerID="309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd" Namespace="calico-apiserver" Pod="calico-apiserver-b955b7974-7c4rk" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--7c4rk-eth0" Sep 4 17:43:17.384199 systemd-networkd[1395]: calie0a78c405ab: Link UP Sep 4 17:43:17.387547 systemd-networkd[1395]: calie0a78c405ab: Gained carrier Sep 4 17:43:17.404058 containerd[1826]: 2024-09-04 17:43:17.236 [INFO][5598] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--888wf-eth0 calico-apiserver-b955b7974- calico-apiserver 79e4c59c-6e48-433a-92ae-ff48085386bf 873 0 2024-09-04 17:43:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b955b7974 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4054.1.0-a-6fd622a1a5 calico-apiserver-b955b7974-888wf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie0a78c405ab [] []}} ContainerID="d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09" Namespace="calico-apiserver" Pod="calico-apiserver-b955b7974-888wf" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--888wf-" Sep 4 17:43:17.404058 containerd[1826]: 2024-09-04 17:43:17.236 [INFO][5598] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09" 
Namespace="calico-apiserver" Pod="calico-apiserver-b955b7974-888wf" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--888wf-eth0" Sep 4 17:43:17.404058 containerd[1826]: 2024-09-04 17:43:17.274 [INFO][5615] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09" HandleID="k8s-pod-network.d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--888wf-eth0" Sep 4 17:43:17.404058 containerd[1826]: 2024-09-04 17:43:17.281 [INFO][5615] ipam_plugin.go 270: Auto assigning IP ContainerID="d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09" HandleID="k8s-pod-network.d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--888wf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002912c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4054.1.0-a-6fd622a1a5", "pod":"calico-apiserver-b955b7974-888wf", "timestamp":"2024-09-04 17:43:17.274654708 +0000 UTC"}, Hostname:"ci-4054.1.0-a-6fd622a1a5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:43:17.404058 containerd[1826]: 2024-09-04 17:43:17.282 [INFO][5615] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:43:17.404058 containerd[1826]: 2024-09-04 17:43:17.304 [INFO][5615] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:43:17.404058 containerd[1826]: 2024-09-04 17:43:17.304 [INFO][5615] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-6fd622a1a5' Sep 4 17:43:17.404058 containerd[1826]: 2024-09-04 17:43:17.306 [INFO][5615] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:17.404058 containerd[1826]: 2024-09-04 17:43:17.313 [INFO][5615] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:17.404058 containerd[1826]: 2024-09-04 17:43:17.325 [INFO][5615] ipam.go 489: Trying affinity for 192.168.88.0/26 host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:17.404058 containerd[1826]: 2024-09-04 17:43:17.329 [INFO][5615] ipam.go 155: Attempting to load block cidr=192.168.88.0/26 host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:17.404058 containerd[1826]: 2024-09-04 17:43:17.340 [INFO][5615] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.0/26 host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:17.404058 containerd[1826]: 2024-09-04 17:43:17.341 [INFO][5615] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.0/26 handle="k8s-pod-network.d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:17.404058 containerd[1826]: 2024-09-04 17:43:17.349 [INFO][5615] ipam.go 1685: Creating new handle: k8s-pod-network.d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09 Sep 4 17:43:17.404058 containerd[1826]: 2024-09-04 17:43:17.356 [INFO][5615] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.0/26 handle="k8s-pod-network.d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:17.404058 
containerd[1826]: 2024-09-04 17:43:17.365 [INFO][5615] ipam.go 1216: Successfully claimed IPs: [192.168.88.6/26] block=192.168.88.0/26 handle="k8s-pod-network.d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:17.404058 containerd[1826]: 2024-09-04 17:43:17.365 [INFO][5615] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.6/26] handle="k8s-pod-network.d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09" host="ci-4054.1.0-a-6fd622a1a5" Sep 4 17:43:17.404058 containerd[1826]: 2024-09-04 17:43:17.365 [INFO][5615] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:43:17.404058 containerd[1826]: 2024-09-04 17:43:17.365 [INFO][5615] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.6/26] IPv6=[] ContainerID="d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09" HandleID="k8s-pod-network.d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09" Workload="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--888wf-eth0" Sep 4 17:43:17.404969 containerd[1826]: 2024-09-04 17:43:17.373 [INFO][5598] k8s.go 386: Populated endpoint ContainerID="d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09" Namespace="calico-apiserver" Pod="calico-apiserver-b955b7974-888wf" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--888wf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--888wf-eth0", GenerateName:"calico-apiserver-b955b7974-", Namespace:"calico-apiserver", SelfLink:"", UID:"79e4c59c-6e48-433a-92ae-ff48085386bf", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 43, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b955b7974", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-6fd622a1a5", ContainerID:"", Pod:"calico-apiserver-b955b7974-888wf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0a78c405ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:43:17.404969 containerd[1826]: 2024-09-04 17:43:17.373 [INFO][5598] k8s.go 387: Calico CNI using IPs: [192.168.88.6/32] ContainerID="d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09" Namespace="calico-apiserver" Pod="calico-apiserver-b955b7974-888wf" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--888wf-eth0" Sep 4 17:43:17.404969 containerd[1826]: 2024-09-04 17:43:17.374 [INFO][5598] dataplane_linux.go 68: Setting the host side veth name to calie0a78c405ab ContainerID="d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09" Namespace="calico-apiserver" 
Pod="calico-apiserver-b955b7974-888wf" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--888wf-eth0" Sep 4 17:43:17.404969 containerd[1826]: 2024-09-04 17:43:17.387 [INFO][5598] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09" Namespace="calico-apiserver" Pod="calico-apiserver-b955b7974-888wf" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--888wf-eth0" Sep 4 17:43:17.404969 containerd[1826]: 2024-09-04 17:43:17.388 [INFO][5598] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09" Namespace="calico-apiserver" Pod="calico-apiserver-b955b7974-888wf" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--888wf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--888wf-eth0", GenerateName:"calico-apiserver-b955b7974-", Namespace:"calico-apiserver", SelfLink:"", UID:"79e4c59c-6e48-433a-92ae-ff48085386bf", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 43, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b955b7974", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-6fd622a1a5", ContainerID:"d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09", Pod:"calico-apiserver-b955b7974-888wf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0a78c405ab", MAC:"c2:9d:53:a1:32:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:43:17.404969 containerd[1826]: 2024-09-04 17:43:17.398 [INFO][5598] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09" Namespace="calico-apiserver" Pod="calico-apiserver-b955b7974-888wf" WorkloadEndpoint="ci--4054.1.0--a--6fd622a1a5-k8s-calico--apiserver--b955b7974--888wf-eth0" Sep 4 17:43:17.421484 containerd[1826]: time="2024-09-04T17:43:17.420952469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:43:17.421484 containerd[1826]: time="2024-09-04T17:43:17.421111770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:43:17.421484 containerd[1826]: time="2024-09-04T17:43:17.421148970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:43:17.425801 containerd[1826]: time="2024-09-04T17:43:17.421732675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:43:17.495571 containerd[1826]: time="2024-09-04T17:43:17.495494509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:43:17.495900 containerd[1826]: time="2024-09-04T17:43:17.495591210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:43:17.496064 containerd[1826]: time="2024-09-04T17:43:17.495634410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:43:17.496499 containerd[1826]: time="2024-09-04T17:43:17.496387116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:43:17.522507 containerd[1826]: time="2024-09-04T17:43:17.522471705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b955b7974-7c4rk,Uid:a9f43663-f169-4008-91e6-8921158c2982,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd\"" Sep 4 17:43:17.531044 containerd[1826]: time="2024-09-04T17:43:17.530705165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:43:17.572317 containerd[1826]: time="2024-09-04T17:43:17.572286566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b955b7974-888wf,Uid:79e4c59c-6e48-433a-92ae-ff48085386bf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09\"" Sep 4 17:43:18.705503 systemd-networkd[1395]: cali2a318e2b2d2: Gained IPv6LL Sep 4 17:43:19.345530 systemd-networkd[1395]: calie0a78c405ab: Gained IPv6LL Sep 4 17:43:20.630793 containerd[1826]: time="2024-09-04T17:43:20.630689342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:20.632935 containerd[1826]: time="2024-09-04T17:43:20.632876458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Sep 4 17:43:20.636354 containerd[1826]: time="2024-09-04T17:43:20.636302783Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:20.641723 containerd[1826]: time="2024-09-04T17:43:20.641674322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:20.642390 containerd[1826]: time="2024-09-04T17:43:20.642358327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 3.111601562s" Sep 4 17:43:20.643315 containerd[1826]: 
time="2024-09-04T17:43:20.642396127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 17:43:20.644201 containerd[1826]: time="2024-09-04T17:43:20.644175240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:43:20.645145 containerd[1826]: time="2024-09-04T17:43:20.645106847Z" level=info msg="CreateContainer within sandbox \"309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:43:20.683566 containerd[1826]: time="2024-09-04T17:43:20.683477325Z" level=info msg="CreateContainer within sandbox \"309032b09c6d248dc1549c76f32924f30b222c23adecd178a52b02d1cec7e3cd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9f58a77fcdfb5dc5704ccf916c856557e6588d3189f940d549d8817c0ed8ae52\"" Sep 4 17:43:20.683512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount726965248.mount: Deactivated successfully. Sep 4 17:43:20.686323 containerd[1826]: time="2024-09-04T17:43:20.685105137Z" level=info msg="StartContainer for \"9f58a77fcdfb5dc5704ccf916c856557e6588d3189f940d549d8817c0ed8ae52\"" Sep 4 17:43:20.783154 containerd[1826]: time="2024-09-04T17:43:20.783063147Z" level=info msg="StartContainer for \"9f58a77fcdfb5dc5704ccf916c856557e6588d3189f940d549d8817c0ed8ae52\" returns successfully" Sep 4 17:43:21.005725 containerd[1826]: time="2024-09-04T17:43:21.005021256Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:21.008621 containerd[1826]: time="2024-09-04T17:43:21.008577882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Sep 4 17:43:21.011604 containerd[1826]: time="2024-09-04T17:43:21.011568404Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 366.597958ms" Sep 4 17:43:21.011731 containerd[1826]: time="2024-09-04T17:43:21.011714905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 17:43:21.016571 containerd[1826]: time="2024-09-04T17:43:21.016540740Z" level=info msg="CreateContainer within sandbox \"d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:43:21.070278 containerd[1826]: time="2024-09-04T17:43:21.069053821Z" level=info msg="CreateContainer within sandbox \"d1571b0bf141c5556f49b712ca26c82218f225d3e7a2e0d4a3c5cf35b78e5f09\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5a91ef000f1d683d37b3161e1659d07ce210f26dc7c2aa6694ad69871c276c61\"" Sep 4 17:43:21.070278 containerd[1826]: time="2024-09-04T17:43:21.069730225Z" level=info msg="StartContainer for \"5a91ef000f1d683d37b3161e1659d07ce210f26dc7c2aa6694ad69871c276c61\"" Sep 4 17:43:21.154128 containerd[1826]: time="2024-09-04T17:43:21.154068137Z" level=info msg="StartContainer for 
\"5a91ef000f1d683d37b3161e1659d07ce210f26dc7c2aa6694ad69871c276c61\" returns successfully" Sep 4 17:43:21.585642 kubelet[3385]: I0904 17:43:21.582527 3385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b955b7974-7c4rk" podStartSLOduration=3.465402041 podCreationTimestamp="2024-09-04 17:43:15 +0000 UTC" firstStartedPulling="2024-09-04 17:43:17.525841229 +0000 UTC m=+67.391384754" lastFinishedPulling="2024-09-04 17:43:20.642922331 +0000 UTC m=+70.508465956" observedRunningTime="2024-09-04 17:43:21.582097941 +0000 UTC m=+71.447641466" watchObservedRunningTime="2024-09-04 17:43:21.582483243 +0000 UTC m=+71.448026968" Sep 4 17:43:21.598143 kubelet[3385]: I0904 17:43:21.597935 3385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b955b7974-888wf" podStartSLOduration=3.159308623 podCreationTimestamp="2024-09-04 17:43:15 +0000 UTC" firstStartedPulling="2024-09-04 17:43:17.573483275 +0000 UTC m=+67.439026800" lastFinishedPulling="2024-09-04 17:43:21.012071607 +0000 UTC m=+70.877615132" observedRunningTime="2024-09-04 17:43:21.595052734 +0000 UTC m=+71.460596359" watchObservedRunningTime="2024-09-04 17:43:21.597896955 +0000 UTC m=+71.463440480" Sep 4 17:43:43.640851 systemd[1]: run-containerd-runc-k8s.io-d1d2d0a1b34ad973193d34ed2deb86cf102dc2e6acb956fb87b16356592925ae-runc.dW0pHR.mount: Deactivated successfully. Sep 4 17:44:10.432541 systemd[1]: Started sshd@7-10.200.4.29:22-10.200.16.10:41990.service - OpenSSH per-connection server daemon (10.200.16.10:41990). Sep 4 17:44:11.020294 sshd[5912]: Accepted publickey for core from 10.200.16.10 port 41990 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:44:11.022116 sshd[5912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:11.026734 systemd-logind[1798]: New session 10 of user core. Sep 4 17:44:11.033699 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:44:11.500636 sshd[5912]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:11.504797 systemd[1]: sshd@7-10.200.4.29:22-10.200.16.10:41990.service: Deactivated successfully. Sep 4 17:44:11.509384 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:44:11.510961 systemd-logind[1798]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:44:11.511829 systemd-logind[1798]: Removed session 10. Sep 4 17:44:13.926860 systemd[1]: run-containerd-runc-k8s.io-0812535c60a83b71914f14b5dde108a825af01c336def4ed18de504cc085d5a9-runc.7HuZ8k.mount: Deactivated successfully. Sep 4 17:44:16.604673 systemd[1]: Started sshd@8-10.200.4.29:22-10.200.16.10:41992.service - OpenSSH per-connection server daemon (10.200.16.10:41992). Sep 4 17:44:17.195534 sshd[5971]: Accepted publickey for core from 10.200.16.10 port 41992 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:44:17.197020 sshd[5971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:17.201735 systemd-logind[1798]: New session 11 of user core. Sep 4 17:44:17.204792 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:44:17.678049 sshd[5971]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:17.681165 systemd[1]: sshd@8-10.200.4.29:22-10.200.16.10:41992.service: Deactivated successfully. Sep 4 17:44:17.686796 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:44:17.688899 systemd-logind[1798]: Session 11 logged out. 
Waiting for processes to exit. Sep 4 17:44:17.690687 systemd-logind[1798]: Removed session 11. Sep 4 17:44:22.780999 systemd[1]: Started sshd@9-10.200.4.29:22-10.200.16.10:51028.service - OpenSSH per-connection server daemon (10.200.16.10:51028). Sep 4 17:44:23.357505 sshd[6015]: Accepted publickey for core from 10.200.16.10 port 51028 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:44:23.359465 sshd[6015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:23.365157 systemd-logind[1798]: New session 12 of user core. Sep 4 17:44:23.371086 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:44:23.824541 sshd[6015]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:23.827933 systemd[1]: sshd@9-10.200.4.29:22-10.200.16.10:51028.service: Deactivated successfully. Sep 4 17:44:23.834366 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:44:23.835590 systemd-logind[1798]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:44:23.836506 systemd-logind[1798]: Removed session 12. Sep 4 17:44:23.936535 systemd[1]: Started sshd@10-10.200.4.29:22-10.200.16.10:51038.service - OpenSSH per-connection server daemon (10.200.16.10:51038). Sep 4 17:44:24.515825 sshd[6030]: Accepted publickey for core from 10.200.16.10 port 51038 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:44:24.517332 sshd[6030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:24.521348 systemd-logind[1798]: New session 13 of user core. Sep 4 17:44:24.528089 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:44:25.608563 sshd[6030]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:25.612478 systemd[1]: sshd@10-10.200.4.29:22-10.200.16.10:51038.service: Deactivated successfully. Sep 4 17:44:25.618247 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:44:25.619699 systemd-logind[1798]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:44:25.620816 systemd-logind[1798]: Removed session 13. Sep 4 17:44:25.708567 systemd[1]: Started sshd@11-10.200.4.29:22-10.200.16.10:51040.service - OpenSSH per-connection server daemon (10.200.16.10:51040). Sep 4 17:44:26.283776 sshd[6043]: Accepted publickey for core from 10.200.16.10 port 51040 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:44:26.285195 sshd[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:26.289343 systemd-logind[1798]: New session 14 of user core. Sep 4 17:44:26.296567 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:44:26.750620 sshd[6043]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:26.753658 systemd[1]: sshd@11-10.200.4.29:22-10.200.16.10:51040.service: Deactivated successfully. Sep 4 17:44:26.759109 systemd-logind[1798]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:44:26.759889 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:44:26.761093 systemd-logind[1798]: Removed session 14. Sep 4 17:44:31.852541 systemd[1]: Started sshd@12-10.200.4.29:22-10.200.16.10:35970.service - OpenSSH per-connection server daemon (10.200.16.10:35970). 
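
The containerd entries above trace the full container lifecycle for the two calico-apiserver pods: RunPodSandbox returns a sandbox ID, PullImage fetches ghcr.io/flatcar/calico/apiserver:v3.28.1, CreateContainer registers the container inside the sandbox, and StartContainer runs it. As a minimal illustration of that same pull/create/start sequence, here is a sketch using containerd's public Go client. Note that kubelet actually drives containerd over the CRI rather than through this API, and that the socket path, the k8s.io namespace, and the demo-apiserver IDs below are assumed defaults and placeholders, not values taken from this log.

// Illustrative sketch only: kubelet reaches containerd through the CRI, not this
// client API, but the pull -> create -> start sequence mirrors the log entries above.
// The socket path and the "k8s.io" namespace are assumed defaults, not taken from the log.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace by default.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// PullImage: the same image reference that appears in the log above.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.28.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: "demo-apiserver" is a placeholder ID, not one of the
	// sandbox or container IDs from the log.
	container, err := client.NewContainer(ctx, "demo-apiserver",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("demo-apiserver-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: create a task from the container and start it.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("container started")
}
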
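
The kubelet "Observed pod startup duration" entries above report podStartSLOduration, which is, to a first approximation, the time from pod creation to the pod being observed running minus the image-pulling window. The sketch below recomputes it from the timestamps logged for calico-apiserver-b955b7974-7c4rk; the result lands within a few milliseconds of the logged 3.465402041s, the small residual presumably coming from kubelet using internal timestamps that differ slightly from the ones it prints. This is a plausible reconstruction of the calculation, not kubelet's exact code path.

// Rough reconstruction of kubelet's podStartSLOduration for the
// calico-apiserver-b955b7974-7c4rk entry above: pod startup time with the
// image-pull window subtracted. An approximation, not pod_startup_latency_tracker.go itself.
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps copied from the kubelet log entry above.
	created := parse("2024-09-04 17:43:15 +0000 UTC")
	firstPull := parse("2024-09-04 17:43:17.525841229 +0000 UTC")
	lastPull := parse("2024-09-04 17:43:20.642922331 +0000 UTC")
	running := parse("2024-09-04 17:43:21.582097941 +0000 UTC")

	slo := running.Sub(created) - lastPull.Sub(firstPull)
	fmt.Println(slo) // ~3.465s, close to the logged podStartSLOduration=3.465402041
}
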
Sep 4 17:44:32.427526 sshd[6073]: Accepted publickey for core from 10.200.16.10 port 35970 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:44:32.429073 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:32.433650 systemd-logind[1798]: New session 15 of user core. Sep 4 17:44:32.437343 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:44:32.892787 sshd[6073]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:32.896689 systemd[1]: sshd@12-10.200.4.29:22-10.200.16.10:35970.service: Deactivated successfully. Sep 4 17:44:32.902547 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:44:32.903466 systemd-logind[1798]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:44:32.904389 systemd-logind[1798]: Removed session 15. Sep 4 17:44:37.995740 systemd[1]: Started sshd@13-10.200.4.29:22-10.200.16.10:35982.service - OpenSSH per-connection server daemon (10.200.16.10:35982). Sep 4 17:44:38.574149 sshd[6093]: Accepted publickey for core from 10.200.16.10 port 35982 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:44:38.575722 sshd[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:38.580061 systemd-logind[1798]: New session 16 of user core. Sep 4 17:44:38.583748 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:44:39.040896 sshd[6093]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:39.043870 systemd[1]: sshd@13-10.200.4.29:22-10.200.16.10:35982.service: Deactivated successfully. Sep 4 17:44:39.049697 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:44:39.050685 systemd-logind[1798]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:44:39.051627 systemd-logind[1798]: Removed session 16. Sep 4 17:44:44.143550 systemd[1]: Started sshd@14-10.200.4.29:22-10.200.16.10:42384.service - OpenSSH per-connection server daemon (10.200.16.10:42384). Sep 4 17:44:44.728878 sshd[6151]: Accepted publickey for core from 10.200.16.10 port 42384 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:44:44.730417 sshd[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:44.734316 systemd-logind[1798]: New session 17 of user core. Sep 4 17:44:44.741151 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:44:45.253998 sshd[6151]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:45.258946 systemd[1]: sshd@14-10.200.4.29:22-10.200.16.10:42384.service: Deactivated successfully. Sep 4 17:44:45.267490 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:44:45.268806 systemd-logind[1798]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:44:45.270110 systemd-logind[1798]: Removed session 17. Sep 4 17:44:45.355529 systemd[1]: Started sshd@15-10.200.4.29:22-10.200.16.10:42398.service - OpenSSH per-connection server daemon (10.200.16.10:42398). Sep 4 17:44:45.937877 sshd[6164]: Accepted publickey for core from 10.200.16.10 port 42398 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:44:45.939702 sshd[6164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:45.944401 systemd-logind[1798]: New session 18 of user core. Sep 4 17:44:45.950571 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 4 17:44:46.663483 sshd[6164]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:46.667060 systemd[1]: sshd@15-10.200.4.29:22-10.200.16.10:42398.service: Deactivated successfully. Sep 4 17:44:46.673388 systemd-logind[1798]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:44:46.674086 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:44:46.675171 systemd-logind[1798]: Removed session 18. Sep 4 17:44:46.764533 systemd[1]: Started sshd@16-10.200.4.29:22-10.200.16.10:42408.service - OpenSSH per-connection server daemon (10.200.16.10:42408). Sep 4 17:44:47.345178 sshd[6176]: Accepted publickey for core from 10.200.16.10 port 42408 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:44:47.346887 sshd[6176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:47.352503 systemd-logind[1798]: New session 19 of user core. Sep 4 17:44:47.357775 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:44:48.722271 sshd[6176]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:48.725307 systemd[1]: sshd@16-10.200.4.29:22-10.200.16.10:42408.service: Deactivated successfully. Sep 4 17:44:48.731093 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:44:48.731130 systemd-logind[1798]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:44:48.732699 systemd-logind[1798]: Removed session 19. Sep 4 17:44:48.825819 systemd[1]: Started sshd@17-10.200.4.29:22-10.200.16.10:33842.service - OpenSSH per-connection server daemon (10.200.16.10:33842). Sep 4 17:44:49.409121 sshd[6195]: Accepted publickey for core from 10.200.16.10 port 33842 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:44:49.410923 sshd[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:49.415964 systemd-logind[1798]: New session 20 of user core. Sep 4 17:44:49.422498 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:44:50.074894 sshd[6195]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:50.079100 systemd[1]: sshd@17-10.200.4.29:22-10.200.16.10:33842.service: Deactivated successfully. Sep 4 17:44:50.083314 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:44:50.084141 systemd-logind[1798]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:44:50.085391 systemd-logind[1798]: Removed session 20. Sep 4 17:44:50.177786 systemd[1]: Started sshd@18-10.200.4.29:22-10.200.16.10:33850.service - OpenSSH per-connection server daemon (10.200.16.10:33850). Sep 4 17:44:50.763247 sshd[6207]: Accepted publickey for core from 10.200.16.10 port 33850 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:44:50.765139 sshd[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:50.769679 systemd-logind[1798]: New session 21 of user core. Sep 4 17:44:50.773839 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:44:51.242441 sshd[6207]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:51.246239 systemd[1]: sshd@18-10.200.4.29:22-10.200.16.10:33850.service: Deactivated successfully. Sep 4 17:44:51.251196 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:44:51.252344 systemd-logind[1798]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:44:51.253885 systemd-logind[1798]: Removed session 21. 
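
The sshd "Accepted publickey" entries repeated through this section all show the same client key fingerprint, RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY. That value is OpenSSH's key fingerprint: the unpadded base64 encoding of the SHA-256 hash of the public key blob. A short sketch of the same derivation using golang.org/x/crypto/ssh follows; the key below is a made-up Ed25519 placeholder, since the actual RSA key for user core never appears in this log.

// The "SHA256:..." value in the sshd "Accepted publickey" lines above is the
// unpadded base64 SHA-256 hash of the client's public key blob. Sketch of the
// same derivation; the key below is a placeholder, not the key from this host.
package main

import (
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder authorized_keys line (assumption; the real key is not in the log).
	authorizedKey := []byte("ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB3BJEyyyJcJpL5AIgIcVNzQb0S9hkHTKdcCjnoqm0jt example")

	pub, _, _, _, err := ssh.ParseAuthorizedKey(authorizedKey)
	if err != nil {
		log.Fatal(err)
	}
	// Prints "SHA256:...", the same format sshd logs on authentication.
	fmt.Println(ssh.FingerprintSHA256(pub))
}
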
Sep 4 17:44:56.345050 systemd[1]: Started sshd@19-10.200.4.29:22-10.200.16.10:33852.service - OpenSSH per-connection server daemon (10.200.16.10:33852). Sep 4 17:44:56.930491 sshd[6229]: Accepted publickey for core from 10.200.16.10 port 33852 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:44:56.931983 sshd[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:56.936692 systemd-logind[1798]: New session 22 of user core. Sep 4 17:44:56.942821 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:44:57.406878 sshd[6229]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:57.412244 systemd[1]: sshd@19-10.200.4.29:22-10.200.16.10:33852.service: Deactivated successfully. Sep 4 17:44:57.415151 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:44:57.416619 systemd-logind[1798]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:44:57.417630 systemd-logind[1798]: Removed session 22. Sep 4 17:45:02.509570 systemd[1]: Started sshd@20-10.200.4.29:22-10.200.16.10:58694.service - OpenSSH per-connection server daemon (10.200.16.10:58694). Sep 4 17:45:03.097919 sshd[6246]: Accepted publickey for core from 10.200.16.10 port 58694 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:45:03.099892 sshd[6246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:45:03.105458 systemd-logind[1798]: New session 23 of user core. Sep 4 17:45:03.109571 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 17:45:03.564671 sshd[6246]: pam_unix(sshd:session): session closed for user core Sep 4 17:45:03.570137 systemd[1]: sshd@20-10.200.4.29:22-10.200.16.10:58694.service: Deactivated successfully. Sep 4 17:45:03.574937 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:45:03.575819 systemd-logind[1798]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:45:03.576775 systemd-logind[1798]: Removed session 23. Sep 4 17:45:08.668809 systemd[1]: Started sshd@21-10.200.4.29:22-10.200.16.10:46824.service - OpenSSH per-connection server daemon (10.200.16.10:46824). Sep 4 17:45:09.248641 sshd[6265]: Accepted publickey for core from 10.200.16.10 port 46824 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:45:09.250166 sshd[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:45:09.254316 systemd-logind[1798]: New session 24 of user core. Sep 4 17:45:09.260620 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:45:09.722287 sshd[6265]: pam_unix(sshd:session): session closed for user core Sep 4 17:45:09.727882 systemd[1]: sshd@21-10.200.4.29:22-10.200.16.10:46824.service: Deactivated successfully. Sep 4 17:45:09.733468 systemd-logind[1798]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:45:09.733723 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:45:09.735142 systemd-logind[1798]: Removed session 24. Sep 4 17:45:14.823569 systemd[1]: Started sshd@22-10.200.4.29:22-10.200.16.10:46830.service - OpenSSH per-connection server daemon (10.200.16.10:46830). Sep 4 17:45:15.405846 sshd[6326]: Accepted publickey for core from 10.200.16.10 port 46830 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:45:15.407713 sshd[6326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:45:15.412418 systemd-logind[1798]: New session 25 of user core. 
Sep 4 17:45:15.417476 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 17:45:15.872868 sshd[6326]: pam_unix(sshd:session): session closed for user core Sep 4 17:45:15.877340 systemd[1]: sshd@22-10.200.4.29:22-10.200.16.10:46830.service: Deactivated successfully. Sep 4 17:45:15.881567 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:45:15.882471 systemd-logind[1798]: Session 25 logged out. Waiting for processes to exit. Sep 4 17:45:15.883360 systemd-logind[1798]: Removed session 25. Sep 4 17:45:20.977555 systemd[1]: Started sshd@23-10.200.4.29:22-10.200.16.10:34486.service - OpenSSH per-connection server daemon (10.200.16.10:34486). Sep 4 17:45:21.576744 sshd[6358]: Accepted publickey for core from 10.200.16.10 port 34486 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:45:21.578521 sshd[6358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:45:21.584326 systemd-logind[1798]: New session 26 of user core. Sep 4 17:45:21.589566 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 17:45:22.046882 sshd[6358]: pam_unix(sshd:session): session closed for user core Sep 4 17:45:22.050924 systemd[1]: sshd@23-10.200.4.29:22-10.200.16.10:34486.service: Deactivated successfully. Sep 4 17:45:22.055002 systemd-logind[1798]: Session 26 logged out. Waiting for processes to exit. Sep 4 17:45:22.055888 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 17:45:22.057346 systemd-logind[1798]: Removed session 26. Sep 4 17:45:27.148620 systemd[1]: Started sshd@24-10.200.4.29:22-10.200.16.10:34498.service - OpenSSH per-connection server daemon (10.200.16.10:34498). Sep 4 17:45:27.730311 sshd[6380]: Accepted publickey for core from 10.200.16.10 port 34498 ssh2: RSA SHA256:Uyt7oO2EXSFubEx3In16nTY26l+8pigGedEy8tt2QcY Sep 4 17:45:27.732004 sshd[6380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:45:27.736646 systemd-logind[1798]: New session 27 of user core. Sep 4 17:45:27.742517 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 17:45:28.211054 sshd[6380]: pam_unix(sshd:session): session closed for user core Sep 4 17:45:28.214128 systemd[1]: sshd@24-10.200.4.29:22-10.200.16.10:34498.service: Deactivated successfully. Sep 4 17:45:28.219639 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 17:45:28.220632 systemd-logind[1798]: Session 27 logged out. Waiting for processes to exit. Sep 4 17:45:28.221556 systemd-logind[1798]: Removed session 27.
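
The SSH activity above (sessions 10 through 27) follows one repeating pattern: systemd starts a per-connection sshd service, the core user authenticates by public key, pam_unix and systemd-logind open a session scope, and the service is deactivated when the session closes. As a small, purely illustrative exercise (not a tool referenced anywhere in this log), the sketch below pairs the systemd-logind "New session N" and "Removed session N" lines from journal output in this shape, one entry per line as journalctl emits it, and prints how long each session lasted. The year is assumed to be 2024, since these journal timestamps carry none.

// Illustrative parser: pair systemd-logind "New session N" / "Removed session N"
// lines read from stdin and report each session's duration.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
	"time"
)

var (
	newRe     = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*systemd-logind\[\d+\]: New session (\d+) of user`)
	removedRe = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*systemd-logind\[\d+\]: Removed session (\d+)\.`)
)

// The log lines carry no year, so 2024 is assumed here.
func parseStamp(s string) (time.Time, error) {
	return time.Parse("2006 Jan 2 15:04:05.999999999", "2024 "+strings.Join(strings.Fields(s), " "))
}

func main() {
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		if m := newRe.FindStringSubmatch(line); m != nil {
			if t, err := parseStamp(m[1]); err == nil {
				opened[m[2]] = t
			}
		} else if m := removedRe.FindStringSubmatch(line); m != nil {
			if start, ok := opened[m[2]]; ok {
				if end, err := parseStamp(m[1]); err == nil {
					fmt.Printf("session %s lasted %s\n", m[2], end.Sub(start).Round(time.Millisecond))
				}
				delete(opened, m[2])
			}
		}
	}
}

Fed a boot journal on stdin (for example, the output of journalctl -b), it would report session 10 above as lasting roughly 485 ms (opened 17:44:11.026734, removed 17:44:11.511829).
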