Sep 4 17:26:50.068101 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:49:08 -00 2024 Sep 4 17:26:50.068133 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 17:26:50.068146 kernel: BIOS-provided physical RAM map: Sep 4 17:26:50.068156 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 4 17:26:50.068166 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Sep 4 17:26:50.068175 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Sep 4 17:26:50.068187 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Sep 4 17:26:50.068200 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Sep 4 17:26:50.068210 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Sep 4 17:26:50.068220 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Sep 4 17:26:50.068230 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Sep 4 17:26:50.068241 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Sep 4 17:26:50.068251 kernel: printk: bootconsole [earlyser0] enabled Sep 4 17:26:50.068261 kernel: NX (Execute Disable) protection: active Sep 4 17:26:50.068277 kernel: APIC: Static calls initialized Sep 4 17:26:50.068288 kernel: efi: EFI v2.7 by Microsoft Sep 4 17:26:50.068300 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 Sep 4 17:26:50.068311 kernel: SMBIOS 3.1.0 present. 
Sep 4 17:26:50.068323 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Sep 4 17:26:50.068334 kernel: Hypervisor detected: Microsoft Hyper-V Sep 4 17:26:50.068346 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Sep 4 17:26:50.068357 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Sep 4 17:26:50.068368 kernel: Hyper-V: Nested features: 0x1e0101 Sep 4 17:26:50.068380 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Sep 4 17:26:50.068393 kernel: Hyper-V: Using hypercall for remote TLB flush Sep 4 17:26:50.068405 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 4 17:26:50.068417 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 4 17:26:50.068429 kernel: tsc: Marking TSC unstable due to running on Hyper-V Sep 4 17:26:50.068441 kernel: tsc: Detected 2593.906 MHz processor Sep 4 17:26:50.068453 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 17:26:50.068465 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 17:26:50.068477 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Sep 4 17:26:50.068488 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 4 17:26:50.068502 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 17:26:50.068514 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Sep 4 17:26:50.068525 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Sep 4 17:26:50.068556 kernel: Using GB pages for direct mapping Sep 4 17:26:50.068568 kernel: Secure boot disabled Sep 4 17:26:50.068580 kernel: ACPI: Early table checksum verification disabled Sep 4 17:26:50.068592 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Sep 4 17:26:50.068609 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:26:50.068623 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:26:50.068635 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Sep 4 17:26:50.068648 kernel: ACPI: FACS 0x000000003FFFE000 000040 Sep 4 17:26:50.068661 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:26:50.068673 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:26:50.068686 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:26:50.068701 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:26:50.068713 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:26:50.068726 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:26:50.068739 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:26:50.068750 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Sep 4 17:26:50.068761 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Sep 4 17:26:50.068774 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Sep 4 17:26:50.068787 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Sep 4 17:26:50.068803 kernel: ACPI: Reserving SPCR table memory at 
[mem 0x3fff6000-0x3fff604f] Sep 4 17:26:50.068816 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Sep 4 17:26:50.068830 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Sep 4 17:26:50.068843 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Sep 4 17:26:50.068857 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Sep 4 17:26:50.068870 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Sep 4 17:26:50.068884 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 4 17:26:50.068897 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 4 17:26:50.068910 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Sep 4 17:26:50.068927 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Sep 4 17:26:50.068940 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Sep 4 17:26:50.068953 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Sep 4 17:26:50.068965 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Sep 4 17:26:50.068978 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Sep 4 17:26:50.068990 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Sep 4 17:26:50.069002 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Sep 4 17:26:50.069015 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Sep 4 17:26:50.069028 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Sep 4 17:26:50.069042 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Sep 4 17:26:50.069054 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Sep 4 17:26:50.069066 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Sep 4 17:26:50.069078 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Sep 4 17:26:50.069090 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Sep 4 17:26:50.069103 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Sep 4 17:26:50.069115 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Sep 4 17:26:50.069128 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Sep 4 17:26:50.069140 kernel: Zone ranges: Sep 4 17:26:50.069155 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 17:26:50.069167 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 4 17:26:50.069180 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Sep 4 17:26:50.069192 kernel: Movable zone start for each node Sep 4 17:26:50.069204 kernel: Early memory node ranges Sep 4 17:26:50.069217 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 4 17:26:50.069229 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Sep 4 17:26:50.069241 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Sep 4 17:26:50.069254 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Sep 4 17:26:50.069268 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Sep 4 17:26:50.069281 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 17:26:50.069293 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 4 17:26:50.069306 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Sep 4 17:26:50.069319 kernel: ACPI: PM-Timer IO Port: 0x408 Sep 4 
17:26:50.069331 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Sep 4 17:26:50.069344 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Sep 4 17:26:50.069356 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 17:26:50.069369 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 17:26:50.069384 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Sep 4 17:26:50.069396 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 4 17:26:50.069408 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Sep 4 17:26:50.069419 kernel: Booting paravirtualized kernel on Hyper-V Sep 4 17:26:50.069432 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 17:26:50.069445 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 4 17:26:50.069457 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Sep 4 17:26:50.069470 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Sep 4 17:26:50.069482 kernel: pcpu-alloc: [0] 0 1 Sep 4 17:26:50.069496 kernel: Hyper-V: PV spinlocks enabled Sep 4 17:26:50.069508 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 4 17:26:50.069523 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 17:26:50.069554 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 17:26:50.069565 kernel: random: crng init done Sep 4 17:26:50.069575 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 4 17:26:50.069586 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:26:50.069599 kernel: Fallback order for Node 0: 0 Sep 4 17:26:50.069616 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Sep 4 17:26:50.069637 kernel: Policy zone: Normal Sep 4 17:26:50.069652 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:26:50.069664 kernel: software IO TLB: area num 2. Sep 4 17:26:50.069679 kernel: Memory: 8070932K/8387460K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49336K init, 2008K bss, 316268K reserved, 0K cma-reserved) Sep 4 17:26:50.069693 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 4 17:26:50.069706 kernel: ftrace: allocating 37670 entries in 148 pages Sep 4 17:26:50.069719 kernel: ftrace: allocated 148 pages with 3 groups Sep 4 17:26:50.069732 kernel: Dynamic Preempt: voluntary Sep 4 17:26:50.069743 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:26:50.069757 kernel: rcu: RCU event tracing is enabled. Sep 4 17:26:50.069773 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 4 17:26:50.069787 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:26:50.069801 kernel: Rude variant of Tasks RCU enabled. Sep 4 17:26:50.069814 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:26:50.069828 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 4 17:26:50.069843 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 4 17:26:50.069855 kernel: Using NULL legacy PIC Sep 4 17:26:50.069868 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Sep 4 17:26:50.069879 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 17:26:50.069887 kernel: Console: colour dummy device 80x25 Sep 4 17:26:50.069899 kernel: printk: console [tty1] enabled Sep 4 17:26:50.069912 kernel: printk: console [ttyS0] enabled Sep 4 17:26:50.069925 kernel: printk: bootconsole [earlyser0] disabled Sep 4 17:26:50.069937 kernel: ACPI: Core revision 20230628 Sep 4 17:26:50.069951 kernel: Failed to register legacy timer interrupt Sep 4 17:26:50.069967 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 17:26:50.069981 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 4 17:26:50.069994 kernel: Hyper-V: Using IPI hypercalls Sep 4 17:26:50.070008 kernel: APIC: send_IPI() replaced with hv_send_ipi() Sep 4 17:26:50.070023 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Sep 4 17:26:50.070036 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Sep 4 17:26:50.070051 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Sep 4 17:26:50.070065 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Sep 4 17:26:50.070079 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Sep 4 17:26:50.070096 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Sep 4 17:26:50.070110 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 4 17:26:50.070124 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Sep 4 17:26:50.070138 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 17:26:50.070152 kernel: Spectre V2 : Mitigation: Retpolines Sep 4 17:26:50.070166 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Sep 4 17:26:50.070180 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Sep 4 17:26:50.070194 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Sep 4 17:26:50.070208 kernel: RETBleed: Vulnerable Sep 4 17:26:50.070225 kernel: Speculative Store Bypass: Vulnerable Sep 4 17:26:50.070237 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 17:26:50.070250 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 17:26:50.070263 kernel: GDS: Unknown: Dependent on hypervisor status Sep 4 17:26:50.070275 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 17:26:50.070287 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 17:26:50.070302 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 17:26:50.070314 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 4 17:26:50.070326 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 4 17:26:50.070339 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 4 17:26:50.070352 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 17:26:50.070370 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Sep 4 17:26:50.070384 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Sep 4 17:26:50.070399 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Sep 4 17:26:50.070414 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Sep 4 17:26:50.070427 kernel: Freeing SMP alternatives memory: 32K Sep 4 17:26:50.070440 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:26:50.070460 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Sep 4 17:26:50.070472 kernel: SELinux: Initializing. Sep 4 17:26:50.070485 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 17:26:50.072565 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 17:26:50.072580 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Sep 4 17:26:50.072591 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:26:50.072602 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:26:50.072613 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:26:50.072621 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Sep 4 17:26:50.072630 kernel: signal: max sigframe size: 3632 Sep 4 17:26:50.072640 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:26:50.072648 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:26:50.072656 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 4 17:26:50.072666 kernel: smp: Bringing up secondary CPUs ... Sep 4 17:26:50.072674 kernel: smpboot: x86: Booting SMP configuration: Sep 4 17:26:50.072684 kernel: .... node #0, CPUs: #1 Sep 4 17:26:50.072692 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Sep 4 17:26:50.072703 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Sep 4 17:26:50.072711 kernel: smp: Brought up 1 node, 2 CPUs Sep 4 17:26:50.072719 kernel: smpboot: Max logical packages: 1 Sep 4 17:26:50.072730 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Sep 4 17:26:50.072737 kernel: devtmpfs: initialized Sep 4 17:26:50.072746 kernel: x86/mm: Memory block size: 128MB Sep 4 17:26:50.072758 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Sep 4 17:26:50.072766 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:26:50.072777 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 4 17:26:50.072785 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:26:50.072792 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:26:50.072803 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:26:50.072811 kernel: audit: type=2000 audit(1725470809.028:1): state=initialized audit_enabled=0 res=1 Sep 4 17:26:50.072819 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:26:50.072829 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 17:26:50.072839 kernel: cpuidle: using governor menu Sep 4 17:26:50.072848 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:26:50.072857 kernel: dca service started, version 1.12.1 Sep 4 17:26:50.072867 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Sep 4 17:26:50.072875 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 4 17:26:50.072883 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:26:50.072894 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:26:50.072902 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:26:50.072910 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:26:50.072921 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:26:50.072929 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:26:50.072939 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:26:50.072947 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:26:50.072955 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 17:26:50.072965 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 4 17:26:50.072973 kernel: ACPI: Interpreter enabled Sep 4 17:26:50.072981 kernel: ACPI: PM: (supports S0 S5) Sep 4 17:26:50.072991 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 17:26:50.073001 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 17:26:50.073010 kernel: PCI: Ignoring E820 reservations for host bridge windows Sep 4 17:26:50.073019 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Sep 4 17:26:50.073027 kernel: iommu: Default domain type: Translated Sep 4 17:26:50.073036 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 17:26:50.073045 kernel: efivars: Registered efivars operations Sep 4 17:26:50.073053 kernel: PCI: Using ACPI for IRQ routing Sep 4 17:26:50.073063 kernel: PCI: System does not support PCI Sep 4 17:26:50.073071 kernel: vgaarb: loaded Sep 4 17:26:50.073080 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Sep 4 17:26:50.073091 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:26:50.073099 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:26:50.073107 kernel: pnp: PnP ACPI init Sep 4 17:26:50.073117 kernel: 
pnp: PnP ACPI: found 3 devices Sep 4 17:26:50.073125 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 17:26:50.073134 kernel: NET: Registered PF_INET protocol family Sep 4 17:26:50.073143 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 4 17:26:50.073151 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 4 17:26:50.073163 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:26:50.073171 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:26:50.073179 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 4 17:26:50.073189 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 4 17:26:50.073197 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 4 17:26:50.073205 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 4 17:26:50.073215 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:26:50.073223 kernel: NET: Registered PF_XDP protocol family Sep 4 17:26:50.073231 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:26:50.073243 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 4 17:26:50.073250 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Sep 4 17:26:50.073259 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 4 17:26:50.073269 kernel: Initialise system trusted keyrings Sep 4 17:26:50.073277 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Sep 4 17:26:50.073287 kernel: Key type asymmetric registered Sep 4 17:26:50.073295 kernel: Asymmetric key parser 'x509' registered Sep 4 17:26:50.073304 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 4 17:26:50.073313 kernel: io scheduler mq-deadline registered Sep 4 17:26:50.073322 kernel: io scheduler kyber registered Sep 4 17:26:50.073333 kernel: io scheduler bfq registered Sep 4 17:26:50.073340 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 17:26:50.073349 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:26:50.073359 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 17:26:50.073367 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 4 17:26:50.073376 kernel: i8042: PNP: No PS/2 controller found. 
Sep 4 17:26:50.073505 kernel: rtc_cmos 00:02: registered as rtc0 Sep 4 17:26:50.073605 kernel: rtc_cmos 00:02: setting system clock to 2024-09-04T17:26:49 UTC (1725470809) Sep 4 17:26:50.073698 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Sep 4 17:26:50.073709 kernel: intel_pstate: CPU model not supported Sep 4 17:26:50.073717 kernel: efifb: probing for efifb Sep 4 17:26:50.073725 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 4 17:26:50.073733 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 4 17:26:50.073741 kernel: efifb: scrolling: redraw Sep 4 17:26:50.073752 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 4 17:26:50.073762 kernel: Console: switching to colour frame buffer device 128x48 Sep 4 17:26:50.073770 kernel: fb0: EFI VGA frame buffer device Sep 4 17:26:50.073778 kernel: pstore: Using crash dump compression: deflate Sep 4 17:26:50.073789 kernel: pstore: Registered efi_pstore as persistent store backend Sep 4 17:26:50.073802 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:26:50.073817 kernel: Segment Routing with IPv6 Sep 4 17:26:50.073837 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:26:50.073855 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:26:50.073873 kernel: Key type dns_resolver registered Sep 4 17:26:50.073890 kernel: IPI shorthand broadcast: enabled Sep 4 17:26:50.073914 kernel: sched_clock: Marking stable (854003000, 50264200)->(1125997400, -221730200) Sep 4 17:26:50.073931 kernel: registered taskstats version 1 Sep 4 17:26:50.073947 kernel: Loading compiled-in X.509 certificates Sep 4 17:26:50.073962 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: a53bb4e7e3319f75620f709d8a6c7aef0adb3b02' Sep 4 17:26:50.073976 kernel: Key type .fscrypt registered Sep 4 17:26:50.073991 kernel: Key type fscrypt-provisioning registered Sep 4 17:26:50.074007 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 17:26:50.074023 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:26:50.074051 kernel: ima: No architecture policies found Sep 4 17:26:50.074070 kernel: clk: Disabling unused clocks Sep 4 17:26:50.074090 kernel: Freeing unused kernel image (initmem) memory: 49336K Sep 4 17:26:50.074108 kernel: Write protecting the kernel read-only data: 36864k Sep 4 17:26:50.074124 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Sep 4 17:26:50.074141 kernel: Run /init as init process Sep 4 17:26:50.074156 kernel: with arguments: Sep 4 17:26:50.074177 kernel: /init Sep 4 17:26:50.074192 kernel: with environment: Sep 4 17:26:50.074210 kernel: HOME=/ Sep 4 17:26:50.074224 kernel: TERM=linux Sep 4 17:26:50.074240 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:26:50.074258 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:26:50.074276 systemd[1]: Detected virtualization microsoft. Sep 4 17:26:50.074294 systemd[1]: Detected architecture x86-64. Sep 4 17:26:50.074310 systemd[1]: Running in initrd. Sep 4 17:26:50.074330 systemd[1]: No hostname configured, using default hostname. Sep 4 17:26:50.074350 systemd[1]: Hostname set to . Sep 4 17:26:50.074368 systemd[1]: Initializing machine ID from random generator. 
Sep 4 17:26:50.074387 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:26:50.074406 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:26:50.074422 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:26:50.074442 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:26:50.074461 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:26:50.074479 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:26:50.074499 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:26:50.074520 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:26:50.076563 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:26:50.076588 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:26:50.076605 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:26:50.076619 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:26:50.076634 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:26:50.076652 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:26:50.076667 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:26:50.076681 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:26:50.076695 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:26:50.076710 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:26:50.076725 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:26:50.076739 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:26:50.076754 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:26:50.076771 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:26:50.076786 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:26:50.076800 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:26:50.076814 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:26:50.076829 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:26:50.076843 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:26:50.076857 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:26:50.076872 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:26:50.076909 systemd-journald[176]: Collecting audit messages is disabled. Sep 4 17:26:50.076943 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:26:50.076958 systemd-journald[176]: Journal started Sep 4 17:26:50.076990 systemd-journald[176]: Runtime Journal (/run/log/journal/cdcf31ce63d44806837ad5e55c1b7c7b) is 8.0M, max 158.8M, 150.8M free. Sep 4 17:26:50.084643 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:26:50.084992 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Sep 4 17:26:50.086167 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:26:50.086596 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:26:50.107957 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:26:50.115671 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 4 17:26:50.124106 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:26:50.129991 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:26:50.139715 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:26:50.146710 systemd-modules-load[177]: Inserted module 'overlay' Sep 4 17:26:50.157080 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:26:50.163743 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:26:50.177065 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:26:50.199736 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:26:50.211167 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 17:26:50.211202 kernel: Bridge firewalling registered Sep 4 17:26:50.211104 systemd-modules-load[177]: Inserted module 'br_netfilter' Sep 4 17:26:50.211901 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:26:50.222681 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 17:26:50.231694 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:26:50.237515 dracut-cmdline[207]: dracut-dracut-053 Sep 4 17:26:50.237515 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 17:26:50.267415 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:26:50.280054 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:26:50.324021 systemd-resolved[257]: Positive Trust Anchors: Sep 4 17:26:50.325386 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:26:50.325428 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 17:26:50.329656 systemd-resolved[257]: Defaulting to hostname 'linux'. 
Sep 4 17:26:50.330610 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:26:50.355764 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:26:50.370558 kernel: SCSI subsystem initialized Sep 4 17:26:50.381550 kernel: Loading iSCSI transport class v2.0-870. Sep 4 17:26:50.393558 kernel: iscsi: registered transport (tcp) Sep 4 17:26:50.419067 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:26:50.419116 kernel: QLogic iSCSI HBA Driver Sep 4 17:26:50.453851 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:26:50.463710 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:26:50.494078 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 17:26:50.494143 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:26:50.498289 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:26:50.542557 kernel: raid6: avx512x4 gen() 18084 MB/s Sep 4 17:26:50.561544 kernel: raid6: avx512x2 gen() 18312 MB/s Sep 4 17:26:50.580545 kernel: raid6: avx512x1 gen() 18364 MB/s Sep 4 17:26:50.600549 kernel: raid6: avx2x4 gen() 18294 MB/s Sep 4 17:26:50.619544 kernel: raid6: avx2x2 gen() 18254 MB/s Sep 4 17:26:50.639802 kernel: raid6: avx2x1 gen() 13895 MB/s Sep 4 17:26:50.639834 kernel: raid6: using algorithm avx512x1 gen() 18364 MB/s Sep 4 17:26:50.661544 kernel: raid6: .... xor() 25850 MB/s, rmw enabled Sep 4 17:26:50.661575 kernel: raid6: using avx512x2 recovery algorithm Sep 4 17:26:50.688559 kernel: xor: automatically using best checksumming function avx Sep 4 17:26:50.849562 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:26:50.858691 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:26:50.869927 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:26:50.881209 systemd-udevd[395]: Using default interface naming scheme 'v255'. Sep 4 17:26:50.885452 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:26:50.905200 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 17:26:50.916314 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Sep 4 17:26:50.942967 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:26:50.955989 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:26:50.998350 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:26:51.013707 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 17:26:51.040110 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 17:26:51.049810 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:26:51.057202 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:26:51.060867 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:26:51.076912 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 17:26:51.091556 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 17:26:51.105187 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:26:51.124104 kernel: AVX2 version of gcm_enc/dec engaged. 
Sep 4 17:26:51.124157 kernel: AES CTR mode by8 optimization enabled Sep 4 17:26:51.135993 kernel: hv_vmbus: Vmbus version:5.2 Sep 4 17:26:51.131463 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:26:51.131700 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:26:51.142042 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:26:51.148662 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:26:51.148920 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:26:51.152142 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:26:51.180759 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 4 17:26:51.180798 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 4 17:26:51.181552 kernel: hv_vmbus: registering driver hv_storvsc Sep 4 17:26:51.186390 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:26:51.192830 kernel: scsi host0: storvsc_host_t Sep 4 17:26:51.193026 kernel: scsi host1: storvsc_host_t Sep 4 17:26:51.200036 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 4 17:26:51.201517 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:26:51.208710 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Sep 4 17:26:51.202819 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:26:51.220775 kernel: hv_vmbus: registering driver hv_netvsc Sep 4 17:26:51.220816 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 4 17:26:51.229800 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:26:51.234026 kernel: PTP clock support registered Sep 4 17:26:51.255551 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 4 17:26:51.257810 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:26:51.281123 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:26:51.878840 kernel: hv_utils: Registering HyperV Utility Driver Sep 4 17:26:51.878873 kernel: hv_vmbus: registering driver hv_utils Sep 4 17:26:51.878884 kernel: hv_utils: Shutdown IC version 3.2 Sep 4 17:26:51.878898 kernel: hv_utils: TimeSync IC version 4.0 Sep 4 17:26:51.878908 kernel: hv_utils: Heartbeat IC version 3.0 Sep 4 17:26:51.878921 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 17:26:51.868295 systemd-resolved[257]: Clock change detected. Flushing caches. Sep 4 17:26:51.890773 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Sep 4 17:26:51.891062 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 17:26:51.895874 kernel: hv_vmbus: registering driver hid_hyperv Sep 4 17:26:51.902086 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 4 17:26:51.901435 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 4 17:26:51.911292 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 4 17:26:51.911471 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Sep 4 17:26:51.924816 kernel: hv_netvsc 000d3ab3-c13a-000d-3ab3-c13a000d3ab3 eth0: VF slot 1 added Sep 4 17:26:51.925144 kernel: hv_netvsc 000d3ab3-c13a-000d-3ab3-c13a000d3ab3 eth0: VF slot 1 removed Sep 4 17:26:51.934874 kernel: hv_vmbus: registering driver hv_pci Sep 4 17:26:51.949910 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 4 17:26:51.950212 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Sep 4 17:26:51.957606 kernel: sd 1:0:0:0: [sda] Write Protect is off Sep 4 17:26:51.957881 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 4 17:26:51.958106 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 4 17:26:51.965417 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:26:51.965447 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Sep 4 17:26:53.098752 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 4 17:26:53.144883 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (456) Sep 4 17:26:53.160430 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 4 17:26:53.172506 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 4 17:26:53.290873 kernel: BTRFS: device fsid d110be6f-93a3-451a-b365-11b5d04e0602 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (449) Sep 4 17:26:53.304708 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 4 17:26:53.308461 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 4 17:26:53.327022 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Sep 4 17:26:53.339905 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:26:53.347867 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:26:53.482889 kernel: hv_netvsc 000d3ab3-c13a-000d-3ab3-c13a000d3ab3 eth0: VF slot 1 added Sep 4 17:26:53.491764 kernel: hv_pci 21a323a9-527d-457c-9c89-74fa2763ddc9: PCI VMBus probing: Using version 0x10004 Sep 4 17:26:53.491932 kernel: hv_pci 21a323a9-527d-457c-9c89-74fa2763ddc9: PCI host bridge to bus 527d:00 Sep 4 17:26:53.498899 kernel: pci_bus 527d:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Sep 4 17:26:53.499056 kernel: pci_bus 527d:00: No busn resource found for root bus, will use [bus 00-ff] Sep 4 17:26:53.516868 kernel: pci 527d:00:02.0: [15b3:1016] type 00 class 0x020000 Sep 4 17:26:53.516927 kernel: pci 527d:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 4 17:26:53.516958 kernel: pci 527d:00:02.0: enabling Extended Tags Sep 4 17:26:53.531452 kernel: pci 527d:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 527d:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Sep 4 17:26:53.543032 kernel: pci_bus 527d:00: busn_res: [bus 00-ff] end is updated to 00 Sep 4 17:26:53.543217 kernel: pci 527d:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 4 17:26:53.746672 kernel: mlx5_core 527d:00:02.0: enabling device (0000 -> 0002) Sep 4 17:26:53.751866 kernel: mlx5_core 527d:00:02.0: firmware version: 14.30.1284 Sep 4 17:26:53.976463 kernel: hv_netvsc 000d3ab3-c13a-000d-3ab3-c13a000d3ab3 eth0: VF registering: eth1 Sep 4 17:26:53.976774 kernel: mlx5_core 527d:00:02.0 eth1: joined to eth0 Sep 4 17:26:53.977970 kernel: mlx5_core 527d:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 4 17:26:53.994910 kernel: mlx5_core 527d:00:02.0 enP21117s1: renamed from eth1 Sep 4 17:26:54.355914 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:26:54.356383 disk-uuid[593]: The operation has completed successfully. Sep 4 17:26:54.430373 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 17:26:54.430476 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:26:54.456247 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 17:26:54.462484 sh[688]: Success Sep 4 17:26:54.511869 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 4 17:26:54.936904 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:26:54.954758 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:26:54.960386 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 17:26:54.975451 kernel: BTRFS info (device dm-0): first mount of filesystem d110be6f-93a3-451a-b365-11b5d04e0602 Sep 4 17:26:54.975506 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:26:54.979141 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:26:54.981995 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:26:54.984436 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:26:55.739363 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:26:55.743219 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:26:55.753374 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Sep 4 17:26:55.760036 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 17:26:55.775581 kernel: BTRFS info (device sda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:26:55.775628 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:26:55.778395 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:26:55.836507 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:26:55.851864 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:26:55.852492 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:26:55.866493 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 17:26:55.873067 kernel: BTRFS info (device sda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:26:55.877381 systemd-networkd[862]: lo: Link UP Sep 4 17:26:55.877400 systemd-networkd[862]: lo: Gained carrier Sep 4 17:26:55.880351 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:26:55.882756 systemd-networkd[862]: Enumeration completed Sep 4 17:26:55.883611 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:26:55.884368 systemd-networkd[862]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:26:55.884371 systemd-networkd[862]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:26:55.888457 systemd[1]: Reached target network.target - Network. Sep 4 17:26:55.914005 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 17:26:55.945865 kernel: mlx5_core 527d:00:02.0 enP21117s1: Link up Sep 4 17:26:55.982864 kernel: hv_netvsc 000d3ab3-c13a-000d-3ab3-c13a000d3ab3 eth0: Data path switched to VF: enP21117s1 Sep 4 17:26:55.983144 systemd-networkd[862]: enP21117s1: Link UP Sep 4 17:26:55.983274 systemd-networkd[862]: eth0: Link UP Sep 4 17:26:55.983420 systemd-networkd[862]: eth0: Gained carrier Sep 4 17:26:55.983430 systemd-networkd[862]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:26:55.994885 systemd-networkd[862]: enP21117s1: Gained carrier Sep 4 17:26:56.024952 systemd-networkd[862]: eth0: DHCPv4 address 10.200.8.42/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 4 17:26:57.043203 systemd-networkd[862]: eth0: Gained IPv6LL Sep 4 17:26:57.531001 ignition[873]: Ignition 2.18.0 Sep 4 17:26:57.531011 ignition[873]: Stage: fetch-offline Sep 4 17:26:57.531055 ignition[873]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:26:57.531066 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:26:57.531232 ignition[873]: parsed url from cmdline: "" Sep 4 17:26:57.531237 ignition[873]: no config URL provided Sep 4 17:26:57.531245 ignition[873]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:26:57.531256 ignition[873]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:26:57.531267 ignition[873]: failed to fetch config: resource requires networking Sep 4 17:26:57.531447 ignition[873]: Ignition finished successfully Sep 4 17:26:57.551721 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:26:57.563195 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 4 17:26:57.580937 ignition[881]: Ignition 2.18.0 Sep 4 17:26:57.580946 ignition[881]: Stage: fetch Sep 4 17:26:57.581155 ignition[881]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:26:57.581165 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:26:57.581246 ignition[881]: parsed url from cmdline: "" Sep 4 17:26:57.581251 ignition[881]: no config URL provided Sep 4 17:26:57.581257 ignition[881]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:26:57.581265 ignition[881]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:26:57.581299 ignition[881]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 4 17:26:57.670155 ignition[881]: GET result: OK Sep 4 17:26:57.670358 ignition[881]: config has been read from IMDS userdata Sep 4 17:26:57.672036 ignition[881]: parsing config with SHA512: c5e8186fc376dfe47d89f02bf24427a7f6b58c5103f6f9fb59c3e79333ae3a24488aac155d6cb5db58b688b14ef5106d246f597e2e36dcfe012fe729e59bab67 Sep 4 17:26:57.677778 unknown[881]: fetched base config from "system" Sep 4 17:26:57.677804 unknown[881]: fetched base config from "system" Sep 4 17:26:57.679891 ignition[881]: fetch: fetch complete Sep 4 17:26:57.677821 unknown[881]: fetched user config from "azure" Sep 4 17:26:57.679899 ignition[881]: fetch: fetch passed Sep 4 17:26:57.685372 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 4 17:26:57.679956 ignition[881]: Ignition finished successfully Sep 4 17:26:57.696572 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 17:26:57.711599 ignition[888]: Ignition 2.18.0 Sep 4 17:26:57.711623 ignition[888]: Stage: kargs Sep 4 17:26:57.711818 ignition[888]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:26:57.711828 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:26:57.714192 ignition[888]: kargs: kargs passed Sep 4 17:26:57.717550 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 17:26:57.714230 ignition[888]: Ignition finished successfully Sep 4 17:26:57.730037 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 17:26:57.744605 ignition[895]: Ignition 2.18.0 Sep 4 17:26:57.744615 ignition[895]: Stage: disks Sep 4 17:26:57.744838 ignition[895]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:26:57.744865 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:26:57.748412 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 17:26:57.745699 ignition[895]: disks: disks passed Sep 4 17:26:57.745737 ignition[895]: Ignition finished successfully Sep 4 17:26:57.762279 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 17:26:57.765396 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:26:57.774305 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:26:57.777039 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:26:57.782344 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:26:57.796319 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 17:26:57.811966 systemd-networkd[862]: enP21117s1: Gained IPv6LL Sep 4 17:26:57.876601 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Sep 4 17:26:57.885779 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Sep 4 17:26:57.897939 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 17:26:57.997727 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 17:26:58.004985 kernel: EXT4-fs (sda9): mounted filesystem 84a5cefa-c3c7-47d7-9305-7e6877f73628 r/w with ordered data mode. Quota mode: none. Sep 4 17:26:58.000570 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 17:26:58.081930 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:26:58.088186 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 17:26:58.090893 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 4 17:26:58.098977 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 17:26:58.100504 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:26:58.115017 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:26:58.123170 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (915) Sep 4 17:26:58.131278 kernel: BTRFS info (device sda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:26:58.131324 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:26:58.133327 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:26:58.133560 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:26:58.140259 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:26:58.144153 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:26:59.041028 coreos-metadata[917]: Sep 04 17:26:59.040 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 4 17:26:59.047862 coreos-metadata[917]: Sep 04 17:26:59.047 INFO Fetch successful Sep 4 17:26:59.051623 coreos-metadata[917]: Sep 04 17:26:59.051 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 4 17:26:59.068176 coreos-metadata[917]: Sep 04 17:26:59.068 INFO Fetch successful Sep 4 17:26:59.076114 coreos-metadata[917]: Sep 04 17:26:59.076 INFO wrote hostname ci-3975.2.1-a-1f7e34d344 to /sysroot/etc/hostname Sep 4 17:26:59.077579 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 17:26:59.292411 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:26:59.332464 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:26:59.339833 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:26:59.346689 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:27:00.402381 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 17:27:00.413096 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:27:00.421986 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:27:00.427497 kernel: BTRFS info (device sda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:27:00.430982 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:27:00.455160 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 4 17:27:00.460894 ignition[1037]: INFO : Ignition 2.18.0 Sep 4 17:27:00.460894 ignition[1037]: INFO : Stage: mount Sep 4 17:27:00.468766 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:27:00.468766 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:27:00.468766 ignition[1037]: INFO : mount: mount passed Sep 4 17:27:00.468766 ignition[1037]: INFO : Ignition finished successfully Sep 4 17:27:00.464603 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:27:00.481451 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:27:00.489209 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:27:00.505861 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1050) Sep 4 17:27:00.505898 kernel: BTRFS info (device sda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:27:00.509866 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:27:00.514609 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:27:00.519864 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:27:00.520933 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:27:00.546622 ignition[1066]: INFO : Ignition 2.18.0 Sep 4 17:27:00.546622 ignition[1066]: INFO : Stage: files Sep 4 17:27:00.551610 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:27:00.551610 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:27:00.551610 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:27:00.551610 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:27:00.551610 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:27:00.823004 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:27:00.828471 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:27:00.832728 unknown[1066]: wrote ssh authorized keys file for user: core Sep 4 17:27:00.835386 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:27:00.835386 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:27:00.835386 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 17:27:00.942310 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 17:27:01.037427 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:27:01.044106 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:27:01.044106 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 17:27:01.044106 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:27:01.058280 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Sep 4 17:27:01.058280 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:27:01.067734 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:27:01.072370 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:27:01.077238 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:27:01.077238 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:27:01.077238 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:27:01.077238 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:27:01.077238 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:27:01.077238 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:27:01.077238 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Sep 4 17:27:01.515384 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 4 17:27:01.845468 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:27:01.845468 ignition[1066]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 4 17:27:01.860256 ignition[1066]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:27:01.860256 ignition[1066]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:27:01.860256 ignition[1066]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 4 17:27:01.860256 ignition[1066]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:27:01.860256 ignition[1066]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:27:01.860256 ignition[1066]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:27:01.860256 ignition[1066]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:27:01.860256 ignition[1066]: INFO : files: files passed Sep 4 17:27:01.860256 ignition[1066]: INFO : Ignition finished successfully Sep 4 17:27:01.853603 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:27:01.878729 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Sep 4 17:27:01.883010 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:27:01.894259 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:27:01.918611 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:27:01.918611 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:27:01.894340 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 17:27:01.920202 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:27:01.908654 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:27:01.917159 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:27:01.951991 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:27:01.980191 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:27:01.980293 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 17:27:01.986786 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 17:27:01.995750 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:27:01.998665 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:27:02.008032 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:27:02.022194 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:27:02.037969 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:27:02.049126 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:27:02.050801 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:27:02.051693 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:27:02.052094 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:27:02.052219 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:27:02.052992 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:27:02.053441 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:27:02.053857 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:27:02.054293 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:27:02.055449 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:27:02.055844 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:27:02.056240 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:27:02.056669 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:27:02.057119 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:27:02.057507 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:27:02.057909 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:27:02.058032 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:27:02.058788 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Sep 4 17:27:02.059678 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:27:02.060061 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:27:02.097535 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:27:02.103416 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:27:02.103548 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:27:02.109626 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:27:02.109763 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:27:02.189099 ignition[1120]: INFO : Ignition 2.18.0 Sep 4 17:27:02.189099 ignition[1120]: INFO : Stage: umount Sep 4 17:27:02.189099 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:27:02.189099 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:27:02.189099 ignition[1120]: INFO : umount: umount passed Sep 4 17:27:02.189099 ignition[1120]: INFO : Ignition finished successfully Sep 4 17:27:02.120636 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:27:02.120747 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:27:02.123482 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 4 17:27:02.123605 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 17:27:02.157152 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:27:02.159742 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:27:02.159933 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:27:02.177454 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:27:02.192248 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:27:02.192368 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:27:02.192782 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:27:02.192893 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:27:02.195616 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:27:02.195695 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:27:02.231585 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:27:02.231671 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:27:02.236998 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:27:02.237046 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:27:02.237513 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:27:02.237546 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:27:02.240316 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 17:27:02.240352 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 17:27:02.240747 systemd[1]: Stopped target network.target - Network. Sep 4 17:27:02.241182 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:27:02.241215 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:27:02.241656 systemd[1]: Stopped target paths.target - Path Units. 
Sep 4 17:27:02.243920 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:27:02.299166 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:27:02.309032 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:27:02.313806 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:27:02.318710 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:27:02.318767 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:27:02.323603 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:27:02.323645 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:27:02.328278 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:27:02.328336 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:27:02.333812 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:27:02.333866 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:27:02.339387 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:27:02.346729 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:27:02.355192 systemd-networkd[862]: eth0: DHCPv6 lease lost Sep 4 17:27:02.363328 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:27:02.366135 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:27:02.368581 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:27:02.374841 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:27:02.374951 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:27:02.389929 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:27:02.394965 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:27:02.395024 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:27:02.398450 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:27:02.399572 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:27:02.400245 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:27:02.409316 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:27:02.409406 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:27:02.417195 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:27:02.417617 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:27:02.419160 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:27:02.419203 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:27:02.440824 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:27:02.440964 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:27:02.447923 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:27:02.447998 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:27:02.467777 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Sep 4 17:27:02.476751 kernel: hv_netvsc 000d3ab3-c13a-000d-3ab3-c13a000d3ab3 eth0: Data path switched from VF: enP21117s1 Sep 4 17:27:02.467819 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:27:02.476746 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:27:02.476806 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:27:02.487331 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:27:02.487392 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:27:02.493523 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:27:02.493569 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:27:02.507357 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:27:02.510489 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:27:02.510541 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:27:02.516928 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:27:02.516966 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:27:02.520519 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:27:02.520612 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:27:02.527714 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:27:02.527791 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:27:02.843204 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:27:02.843359 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:27:02.848900 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:27:02.856793 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:27:02.856869 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:27:02.869024 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:27:03.296638 systemd[1]: Switching root. 
Sep 4 17:27:03.349995 systemd-journald[176]: Journal stopped Sep 4 17:26:50.068101 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:49:08 -00 2024 Sep 4 17:26:50.068133 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 17:26:50.068146 kernel: BIOS-provided physical RAM map: Sep 4 17:26:50.068156 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 4 17:26:50.068166 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Sep 4 17:26:50.068175 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Sep 4 17:26:50.068187 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Sep 4 17:26:50.068200 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Sep 4 17:26:50.068210 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Sep 4 17:26:50.068220 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Sep 4 17:26:50.068230 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Sep 4 17:26:50.068241 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Sep 4 17:26:50.068251 kernel: printk: bootconsole [earlyser0] enabled Sep 4 17:26:50.068261 kernel: NX (Execute Disable) protection: active Sep 4 17:26:50.068277 kernel: APIC: Static calls initialized Sep 4 17:26:50.068288 kernel: efi: EFI v2.7 by Microsoft Sep 4 17:26:50.068300 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 Sep 4 17:26:50.068311 kernel: SMBIOS 3.1.0 present. 
Sep 4 17:26:50.068323 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Sep 4 17:26:50.068334 kernel: Hypervisor detected: Microsoft Hyper-V Sep 4 17:26:50.068346 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Sep 4 17:26:50.068357 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Sep 4 17:26:50.068368 kernel: Hyper-V: Nested features: 0x1e0101 Sep 4 17:26:50.068380 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Sep 4 17:26:50.068393 kernel: Hyper-V: Using hypercall for remote TLB flush Sep 4 17:26:50.068405 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 4 17:26:50.068417 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 4 17:26:50.068429 kernel: tsc: Marking TSC unstable due to running on Hyper-V Sep 4 17:26:50.068441 kernel: tsc: Detected 2593.906 MHz processor Sep 4 17:26:50.068453 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 17:26:50.068465 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 17:26:50.068477 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Sep 4 17:26:50.068488 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 4 17:26:50.068502 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 17:26:50.068514 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Sep 4 17:26:50.068525 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Sep 4 17:26:50.068556 kernel: Using GB pages for direct mapping Sep 4 17:26:50.068568 kernel: Secure boot disabled Sep 4 17:26:50.068580 kernel: ACPI: Early table checksum verification disabled Sep 4 17:26:50.068592 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Sep 4 17:26:50.068609 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:26:50.068623 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:26:50.068635 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Sep 4 17:26:50.068648 kernel: ACPI: FACS 0x000000003FFFE000 000040 Sep 4 17:26:50.068661 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:26:50.068673 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:26:50.068686 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:26:50.068701 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:26:50.068713 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:26:50.068726 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:26:50.068739 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 4 17:26:50.068750 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Sep 4 17:26:50.068761 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Sep 4 17:26:50.068774 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Sep 4 17:26:50.068787 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Sep 4 17:26:50.068803 kernel: ACPI: Reserving SPCR table memory at 
[mem 0x3fff6000-0x3fff604f] Sep 4 17:26:50.068816 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Sep 4 17:26:50.068830 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Sep 4 17:26:50.068843 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Sep 4 17:26:50.068857 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Sep 4 17:26:50.068870 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Sep 4 17:26:50.068884 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 4 17:26:50.068897 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 4 17:26:50.068910 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Sep 4 17:26:50.068927 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Sep 4 17:26:50.068940 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Sep 4 17:26:50.068953 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Sep 4 17:26:50.068965 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Sep 4 17:26:50.068978 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Sep 4 17:26:50.068990 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Sep 4 17:26:50.069002 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Sep 4 17:26:50.069015 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Sep 4 17:26:50.069028 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Sep 4 17:26:50.069042 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Sep 4 17:26:50.069054 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Sep 4 17:26:50.069066 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Sep 4 17:26:50.069078 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Sep 4 17:26:50.069090 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Sep 4 17:26:50.069103 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Sep 4 17:26:50.069115 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Sep 4 17:26:50.069128 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Sep 4 17:26:50.069140 kernel: Zone ranges: Sep 4 17:26:50.069155 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 17:26:50.069167 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 4 17:26:50.069180 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Sep 4 17:26:50.069192 kernel: Movable zone start for each node Sep 4 17:26:50.069204 kernel: Early memory node ranges Sep 4 17:26:50.069217 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 4 17:26:50.069229 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Sep 4 17:26:50.069241 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Sep 4 17:26:50.069254 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Sep 4 17:26:50.069268 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Sep 4 17:26:50.069281 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 17:26:50.069293 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 4 17:26:50.069306 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Sep 4 17:26:50.069319 kernel: ACPI: PM-Timer IO Port: 0x408 Sep 4 
17:26:50.069331 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Sep 4 17:26:50.069344 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Sep 4 17:26:50.069356 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 17:26:50.069369 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 17:26:50.069384 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Sep 4 17:26:50.069396 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 4 17:26:50.069408 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Sep 4 17:26:50.069419 kernel: Booting paravirtualized kernel on Hyper-V Sep 4 17:26:50.069432 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 17:26:50.069445 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 4 17:26:50.069457 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Sep 4 17:26:50.069470 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Sep 4 17:26:50.069482 kernel: pcpu-alloc: [0] 0 1 Sep 4 17:26:50.069496 kernel: Hyper-V: PV spinlocks enabled Sep 4 17:26:50.069508 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 4 17:26:50.069523 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 17:26:50.069554 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 17:26:50.069565 kernel: random: crng init done Sep 4 17:26:50.069575 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 4 17:26:50.069586 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:26:50.069599 kernel: Fallback order for Node 0: 0 Sep 4 17:26:50.069616 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Sep 4 17:26:50.069637 kernel: Policy zone: Normal Sep 4 17:26:50.069652 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:26:50.069664 kernel: software IO TLB: area num 2. Sep 4 17:26:50.069679 kernel: Memory: 8070932K/8387460K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49336K init, 2008K bss, 316268K reserved, 0K cma-reserved) Sep 4 17:26:50.069693 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 4 17:26:50.069706 kernel: ftrace: allocating 37670 entries in 148 pages Sep 4 17:26:50.069719 kernel: ftrace: allocated 148 pages with 3 groups Sep 4 17:26:50.069732 kernel: Dynamic Preempt: voluntary Sep 4 17:26:50.069743 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:26:50.069757 kernel: rcu: RCU event tracing is enabled. Sep 4 17:26:50.069773 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 4 17:26:50.069787 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:26:50.069801 kernel: Rude variant of Tasks RCU enabled. Sep 4 17:26:50.069814 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:26:50.069828 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
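[Annotation] The "Calibrating delay loop (skipped)" entry above reports 5187.81 BogoMIPS with lpj=2593906; because the TSC frequency is already known (2593.906 MHz, earlier in the log), the kernel derives the value from loops_per_jiffy instead of measuring it. Assuming the usual relation bogomips = lpj * HZ / 500000 with HZ=1000 for this build (both assumptions, but they reproduce the logged numbers exactly):

    lpj = 2_593_906                   # loops_per_jiffy from the log line above
    hz = 1000                         # assumed tick rate for this kernel build
    bogomips = lpj * hz / 500_000
    print(f"{bogomips:.2f} BogoMIPS per CPU")   # 5187.81, as logged
    print(f"{2 * bogomips:.2f} total")          # 10375.62 for 2 CPUs, matching the later smpboot line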
Sep 4 17:26:50.069843 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 4 17:26:50.069855 kernel: Using NULL legacy PIC Sep 4 17:26:50.069868 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Sep 4 17:26:50.069879 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 17:26:50.069887 kernel: Console: colour dummy device 80x25 Sep 4 17:26:50.069899 kernel: printk: console [tty1] enabled Sep 4 17:26:50.069912 kernel: printk: console [ttyS0] enabled Sep 4 17:26:50.069925 kernel: printk: bootconsole [earlyser0] disabled Sep 4 17:26:50.069937 kernel: ACPI: Core revision 20230628 Sep 4 17:26:50.069951 kernel: Failed to register legacy timer interrupt Sep 4 17:26:50.069967 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 17:26:50.069981 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 4 17:26:50.069994 kernel: Hyper-V: Using IPI hypercalls Sep 4 17:26:50.070008 kernel: APIC: send_IPI() replaced with hv_send_ipi() Sep 4 17:26:50.070023 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Sep 4 17:26:50.070036 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Sep 4 17:26:50.070051 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Sep 4 17:26:50.070065 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Sep 4 17:26:50.070079 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Sep 4 17:26:50.070096 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Sep 4 17:26:50.070110 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 4 17:26:50.070124 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Sep 4 17:26:50.070138 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 17:26:50.070152 kernel: Spectre V2 : Mitigation: Retpolines Sep 4 17:26:50.070166 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Sep 4 17:26:50.070180 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Sep 4 17:26:50.070194 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Sep 4 17:26:50.070208 kernel: RETBleed: Vulnerable Sep 4 17:26:50.070225 kernel: Speculative Store Bypass: Vulnerable Sep 4 17:26:50.070237 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 17:26:50.070250 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 17:26:50.070263 kernel: GDS: Unknown: Dependent on hypervisor status Sep 4 17:26:50.070275 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 17:26:50.070287 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 17:26:50.070302 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 17:26:50.070314 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 4 17:26:50.070326 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 4 17:26:50.070339 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 4 17:26:50.070352 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 17:26:50.070370 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Sep 4 17:26:50.070384 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Sep 4 17:26:50.070399 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Sep 4 17:26:50.070414 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Sep 4 17:26:50.070427 kernel: Freeing SMP alternatives memory: 32K Sep 4 17:26:50.070440 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:26:50.070460 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Sep 4 17:26:50.070472 kernel: SELinux: Initializing. Sep 4 17:26:50.070485 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 17:26:50.072565 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 17:26:50.072580 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Sep 4 17:26:50.072591 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:26:50.072602 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:26:50.072613 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:26:50.072621 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Sep 4 17:26:50.072630 kernel: signal: max sigframe size: 3632 Sep 4 17:26:50.072640 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:26:50.072648 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:26:50.072656 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 4 17:26:50.072666 kernel: smp: Bringing up secondary CPUs ... Sep 4 17:26:50.072674 kernel: smpboot: x86: Booting SMP configuration: Sep 4 17:26:50.072684 kernel: .... node #0, CPUs: #1 Sep 4 17:26:50.072692 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Sep 4 17:26:50.072703 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Sep 4 17:26:50.072711 kernel: smp: Brought up 1 node, 2 CPUs Sep 4 17:26:50.072719 kernel: smpboot: Max logical packages: 1 Sep 4 17:26:50.072730 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Sep 4 17:26:50.072737 kernel: devtmpfs: initialized Sep 4 17:26:50.072746 kernel: x86/mm: Memory block size: 128MB Sep 4 17:26:50.072758 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Sep 4 17:26:50.072766 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:26:50.072777 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 4 17:26:50.072785 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:26:50.072792 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:26:50.072803 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:26:50.072811 kernel: audit: type=2000 audit(1725470809.028:1): state=initialized audit_enabled=0 res=1 Sep 4 17:26:50.072819 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:26:50.072829 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 17:26:50.072839 kernel: cpuidle: using governor menu Sep 4 17:26:50.072848 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:26:50.072857 kernel: dca service started, version 1.12.1 Sep 4 17:26:50.072867 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Sep 4 17:26:50.072875 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 4 17:26:50.072883 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:26:50.072894 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:26:50.072902 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:26:50.072910 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:26:50.072921 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:26:50.072929 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:26:50.072939 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:26:50.072947 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:26:50.072955 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 17:26:50.072965 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 4 17:26:50.072973 kernel: ACPI: Interpreter enabled Sep 4 17:26:50.072981 kernel: ACPI: PM: (supports S0 S5) Sep 4 17:26:50.072991 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 17:26:50.073001 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 17:26:50.073010 kernel: PCI: Ignoring E820 reservations for host bridge windows Sep 4 17:26:50.073019 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Sep 4 17:26:50.073027 kernel: iommu: Default domain type: Translated Sep 4 17:26:50.073036 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 17:26:50.073045 kernel: efivars: Registered efivars operations Sep 4 17:26:50.073053 kernel: PCI: Using ACPI for IRQ routing Sep 4 17:26:50.073063 kernel: PCI: System does not support PCI Sep 4 17:26:50.073071 kernel: vgaarb: loaded Sep 4 17:26:50.073080 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Sep 4 17:26:50.073091 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:26:50.073099 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:26:50.073107 kernel: pnp: PnP ACPI init Sep 4 17:26:50.073117 kernel: 
pnp: PnP ACPI: found 3 devices Sep 4 17:26:50.073125 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 17:26:50.073134 kernel: NET: Registered PF_INET protocol family Sep 4 17:26:50.073143 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 4 17:26:50.073151 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 4 17:26:50.073163 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:26:50.073171 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:26:50.073179 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 4 17:26:50.073189 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 4 17:26:50.073197 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 4 17:26:50.073205 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 4 17:26:50.073215 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:26:50.073223 kernel: NET: Registered PF_XDP protocol family Sep 4 17:26:50.073231 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:26:50.073243 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 4 17:26:50.073250 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Sep 4 17:26:50.073259 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 4 17:26:50.073269 kernel: Initialise system trusted keyrings Sep 4 17:26:50.073277 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Sep 4 17:26:50.073287 kernel: Key type asymmetric registered Sep 4 17:26:50.073295 kernel: Asymmetric key parser 'x509' registered Sep 4 17:26:50.073304 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 4 17:26:50.073313 kernel: io scheduler mq-deadline registered Sep 4 17:26:50.073322 kernel: io scheduler kyber registered Sep 4 17:26:50.073333 kernel: io scheduler bfq registered Sep 4 17:26:50.073340 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 17:26:50.073349 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:26:50.073359 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 17:26:50.073367 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 4 17:26:50.073376 kernel: i8042: PNP: No PS/2 controller found. 
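[Annotation] The TCP hash-table lines above express their size as a page-allocation order: 65536 established-hash entries at order 7 means 2^7 pages of 4 KiB, i.e. 524288 bytes, or 8 bytes per bucket, and the bind hash at order 9 works out to 32 bytes per bucket. A quick check of that arithmetic:

    PAGE = 4096
    for name, entries, order in [("established", 65536, 7), ("bind", 65536, 9)]:
        size = (1 << order) * PAGE            # pages at this order -> bytes
        print(name, size, "bytes,", size // entries, "bytes/bucket")
    # established 524288 bytes, 8 bytes/bucket
    # bind 2097152 bytes, 32 bytes/bucket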
Sep 4 17:26:50.073505 kernel: rtc_cmos 00:02: registered as rtc0 Sep 4 17:26:50.073605 kernel: rtc_cmos 00:02: setting system clock to 2024-09-04T17:26:49 UTC (1725470809) Sep 4 17:26:50.073698 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Sep 4 17:26:50.073709 kernel: intel_pstate: CPU model not supported Sep 4 17:26:50.073717 kernel: efifb: probing for efifb Sep 4 17:26:50.073725 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 4 17:26:50.073733 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 4 17:26:50.073741 kernel: efifb: scrolling: redraw Sep 4 17:26:50.073752 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 4 17:26:50.073762 kernel: Console: switching to colour frame buffer device 128x48 Sep 4 17:26:50.073770 kernel: fb0: EFI VGA frame buffer device Sep 4 17:26:50.073778 kernel: pstore: Using crash dump compression: deflate Sep 4 17:26:50.073789 kernel: pstore: Registered efi_pstore as persistent store backend Sep 4 17:26:50.073802 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:26:50.073817 kernel: Segment Routing with IPv6 Sep 4 17:26:50.073837 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:26:50.073855 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:26:50.073873 kernel: Key type dns_resolver registered Sep 4 17:26:50.073890 kernel: IPI shorthand broadcast: enabled Sep 4 17:26:50.073914 kernel: sched_clock: Marking stable (854003000, 50264200)->(1125997400, -221730200) Sep 4 17:26:50.073931 kernel: registered taskstats version 1 Sep 4 17:26:50.073947 kernel: Loading compiled-in X.509 certificates Sep 4 17:26:50.073962 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: a53bb4e7e3319f75620f709d8a6c7aef0adb3b02' Sep 4 17:26:50.073976 kernel: Key type .fscrypt registered Sep 4 17:26:50.073991 kernel: Key type fscrypt-provisioning registered Sep 4 17:26:50.074007 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 17:26:50.074023 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:26:50.074051 kernel: ima: No architecture policies found Sep 4 17:26:50.074070 kernel: clk: Disabling unused clocks Sep 4 17:26:50.074090 kernel: Freeing unused kernel image (initmem) memory: 49336K Sep 4 17:26:50.074108 kernel: Write protecting the kernel read-only data: 36864k Sep 4 17:26:50.074124 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Sep 4 17:26:50.074141 kernel: Run /init as init process Sep 4 17:26:50.074156 kernel: with arguments: Sep 4 17:26:50.074177 kernel: /init Sep 4 17:26:50.074192 kernel: with environment: Sep 4 17:26:50.074210 kernel: HOME=/ Sep 4 17:26:50.074224 kernel: TERM=linux Sep 4 17:26:50.074240 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:26:50.074258 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:26:50.074276 systemd[1]: Detected virtualization microsoft. Sep 4 17:26:50.074294 systemd[1]: Detected architecture x86-64. Sep 4 17:26:50.074310 systemd[1]: Running in initrd. Sep 4 17:26:50.074330 systemd[1]: No hostname configured, using default hostname. Sep 4 17:26:50.074350 systemd[1]: Hostname set to . Sep 4 17:26:50.074368 systemd[1]: Initializing machine ID from random generator. 
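[Annotation] The rtc_cmos entry above sets the system clock to 2024-09-04T17:26:49 UTC and prints the matching Unix timestamp 1725470809 (the same epoch value the audit subsystem logged as 1725470809.028). The correspondence is easy to confirm:

    from datetime import datetime, timezone

    # 1725470809 is the epoch value printed by rtc_cmos in the log above.
    print(datetime.fromtimestamp(1725470809, tz=timezone.utc).isoformat())
    # -> 2024-09-04T17:26:49+00:00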
Sep 4 17:26:50.074387 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:26:50.074406 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:26:50.074422 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:26:50.074442 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:26:50.074461 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:26:50.074479 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:26:50.074499 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:26:50.074520 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:26:50.076563 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:26:50.076588 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:26:50.076605 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:26:50.076619 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:26:50.076634 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:26:50.076652 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:26:50.076667 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:26:50.076681 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:26:50.076695 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:26:50.076710 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:26:50.076725 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:26:50.076739 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:26:50.076754 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:26:50.076771 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:26:50.076786 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:26:50.076800 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:26:50.076814 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:26:50.076829 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:26:50.076843 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:26:50.076857 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:26:50.076872 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:26:50.076909 systemd-journald[176]: Collecting audit messages is disabled. Sep 4 17:26:50.076943 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:26:50.076958 systemd-journald[176]: Journal started Sep 4 17:26:50.076990 systemd-journald[176]: Runtime Journal (/run/log/journal/cdcf31ce63d44806837ad5e55c1b7c7b) is 8.0M, max 158.8M, 150.8M free. Sep 4 17:26:50.084643 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:26:50.084992 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Sep 4 17:26:50.086167 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:26:50.086596 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:26:50.107957 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:26:50.115671 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 4 17:26:50.124106 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:26:50.129991 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:26:50.139715 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:26:50.146710 systemd-modules-load[177]: Inserted module 'overlay' Sep 4 17:26:50.157080 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:26:50.163743 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:26:50.177065 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:26:50.199736 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:26:50.211167 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 17:26:50.211202 kernel: Bridge firewalling registered Sep 4 17:26:50.211104 systemd-modules-load[177]: Inserted module 'br_netfilter' Sep 4 17:26:50.211901 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:26:50.222681 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 17:26:50.231694 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:26:50.237515 dracut-cmdline[207]: dracut-dracut-053 Sep 4 17:26:50.237515 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 17:26:50.267415 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:26:50.280054 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:26:50.324021 systemd-resolved[257]: Positive Trust Anchors: Sep 4 17:26:50.325386 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:26:50.325428 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 17:26:50.329656 systemd-resolved[257]: Defaulting to hostname 'linux'. 
Sep 4 17:26:50.330610 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:26:50.355764 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:26:50.370558 kernel: SCSI subsystem initialized Sep 4 17:26:50.381550 kernel: Loading iSCSI transport class v2.0-870. Sep 4 17:26:50.393558 kernel: iscsi: registered transport (tcp) Sep 4 17:26:50.419067 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:26:50.419116 kernel: QLogic iSCSI HBA Driver Sep 4 17:26:50.453851 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:26:50.463710 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:26:50.494078 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 17:26:50.494143 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:26:50.498289 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:26:50.542557 kernel: raid6: avx512x4 gen() 18084 MB/s Sep 4 17:26:50.561544 kernel: raid6: avx512x2 gen() 18312 MB/s Sep 4 17:26:50.580545 kernel: raid6: avx512x1 gen() 18364 MB/s Sep 4 17:26:50.600549 kernel: raid6: avx2x4 gen() 18294 MB/s Sep 4 17:26:50.619544 kernel: raid6: avx2x2 gen() 18254 MB/s Sep 4 17:26:50.639802 kernel: raid6: avx2x1 gen() 13895 MB/s Sep 4 17:26:50.639834 kernel: raid6: using algorithm avx512x1 gen() 18364 MB/s Sep 4 17:26:50.661544 kernel: raid6: .... xor() 25850 MB/s, rmw enabled Sep 4 17:26:50.661575 kernel: raid6: using avx512x2 recovery algorithm Sep 4 17:26:50.688559 kernel: xor: automatically using best checksumming function avx Sep 4 17:26:50.849562 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:26:50.858691 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:26:50.869927 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:26:50.881209 systemd-udevd[395]: Using default interface naming scheme 'v255'. Sep 4 17:26:50.885452 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:26:50.905200 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 17:26:50.916314 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Sep 4 17:26:50.942967 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:26:50.955989 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:26:50.998350 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:26:51.013707 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 17:26:51.040110 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 17:26:51.049810 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:26:51.057202 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:26:51.060867 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:26:51.076912 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 17:26:51.091556 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 17:26:51.105187 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:26:51.124104 kernel: AVX2 version of gcm_enc/dec engaged. 
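[Annotation] The raid6 benchmark above times each gen() implementation and keeps the fastest one (avx512x1 at 18364 MB/s). The selection amounts to a max-by-throughput over the measured table, e.g.:

    # Throughput figures (MB/s) copied from the raid6 benchmark lines above.
    results = {"avx512x4": 18084, "avx512x2": 18312, "avx512x1": 18364,
               "avx2x4": 18294, "avx2x2": 18254, "avx2x1": 13895}
    best = max(results, key=results.get)
    print(f"raid6: using algorithm {best} gen() {results[best]} MB/s")
    # -> raid6: using algorithm avx512x1 gen() 18364 MB/s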
Sep 4 17:26:51.124157 kernel: AES CTR mode by8 optimization enabled Sep 4 17:26:51.135993 kernel: hv_vmbus: Vmbus version:5.2 Sep 4 17:26:51.131463 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:26:51.131700 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:26:51.142042 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:26:51.148662 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:26:51.148920 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:26:51.152142 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:26:51.180759 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 4 17:26:51.180798 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 4 17:26:51.181552 kernel: hv_vmbus: registering driver hv_storvsc Sep 4 17:26:51.186390 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:26:51.192830 kernel: scsi host0: storvsc_host_t Sep 4 17:26:51.193026 kernel: scsi host1: storvsc_host_t Sep 4 17:26:51.200036 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 4 17:26:51.201517 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:26:51.208710 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Sep 4 17:26:51.202819 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:26:51.220775 kernel: hv_vmbus: registering driver hv_netvsc Sep 4 17:26:51.220816 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 4 17:26:51.229800 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:26:51.234026 kernel: PTP clock support registered Sep 4 17:26:51.255551 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 4 17:26:51.257810 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:26:51.281123 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:26:51.878840 kernel: hv_utils: Registering HyperV Utility Driver Sep 4 17:26:51.878873 kernel: hv_vmbus: registering driver hv_utils Sep 4 17:26:51.878884 kernel: hv_utils: Shutdown IC version 3.2 Sep 4 17:26:51.878898 kernel: hv_utils: TimeSync IC version 4.0 Sep 4 17:26:51.878908 kernel: hv_utils: Heartbeat IC version 3.0 Sep 4 17:26:51.878921 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 17:26:51.868295 systemd-resolved[257]: Clock change detected. Flushing caches. Sep 4 17:26:51.890773 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Sep 4 17:26:51.891062 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 17:26:51.895874 kernel: hv_vmbus: registering driver hid_hyperv Sep 4 17:26:51.902086 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 4 17:26:51.901435 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 4 17:26:51.911292 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 4 17:26:51.911471 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Sep 4 17:26:51.924816 kernel: hv_netvsc 000d3ab3-c13a-000d-3ab3-c13a000d3ab3 eth0: VF slot 1 added Sep 4 17:26:51.925144 kernel: hv_netvsc 000d3ab3-c13a-000d-3ab3-c13a000d3ab3 eth0: VF slot 1 removed Sep 4 17:26:51.934874 kernel: hv_vmbus: registering driver hv_pci Sep 4 17:26:51.949910 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 4 17:26:51.950212 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Sep 4 17:26:51.957606 kernel: sd 1:0:0:0: [sda] Write Protect is off Sep 4 17:26:51.957881 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 4 17:26:51.958106 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 4 17:26:51.965417 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:26:51.965447 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Sep 4 17:26:53.098752 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 4 17:26:53.144883 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (456) Sep 4 17:26:53.160430 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 4 17:26:53.172506 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 4 17:26:53.290873 kernel: BTRFS: device fsid d110be6f-93a3-451a-b365-11b5d04e0602 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (449) Sep 4 17:26:53.304708 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 4 17:26:53.308461 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 4 17:26:53.327022 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Sep 4 17:26:53.339905 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:26:53.347867 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:26:53.482889 kernel: hv_netvsc 000d3ab3-c13a-000d-3ab3-c13a000d3ab3 eth0: VF slot 1 added Sep 4 17:26:53.491764 kernel: hv_pci 21a323a9-527d-457c-9c89-74fa2763ddc9: PCI VMBus probing: Using version 0x10004 Sep 4 17:26:53.491932 kernel: hv_pci 21a323a9-527d-457c-9c89-74fa2763ddc9: PCI host bridge to bus 527d:00 Sep 4 17:26:53.498899 kernel: pci_bus 527d:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Sep 4 17:26:53.499056 kernel: pci_bus 527d:00: No busn resource found for root bus, will use [bus 00-ff] Sep 4 17:26:53.516868 kernel: pci 527d:00:02.0: [15b3:1016] type 00 class 0x020000 Sep 4 17:26:53.516927 kernel: pci 527d:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 4 17:26:53.516958 kernel: pci 527d:00:02.0: enabling Extended Tags Sep 4 17:26:53.531452 kernel: pci 527d:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 527d:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Sep 4 17:26:53.543032 kernel: pci_bus 527d:00: busn_res: [bus 00-ff] end is updated to 00 Sep 4 17:26:53.543217 kernel: pci 527d:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 4 17:26:53.746672 kernel: mlx5_core 527d:00:02.0: enabling device (0000 -> 0002) Sep 4 17:26:53.751866 kernel: mlx5_core 527d:00:02.0: firmware version: 14.30.1284 Sep 4 17:26:53.976463 kernel: hv_netvsc 000d3ab3-c13a-000d-3ab3-c13a000d3ab3 eth0: VF registering: eth1 Sep 4 17:26:53.976774 kernel: mlx5_core 527d:00:02.0 eth1: joined to eth0 Sep 4 17:26:53.977970 kernel: mlx5_core 527d:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 4 17:26:53.994910 kernel: mlx5_core 527d:00:02.0 enP21117s1: renamed from eth1 Sep 4 17:26:54.355914 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:26:54.356383 disk-uuid[593]: The operation has completed successfully. Sep 4 17:26:54.430373 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 17:26:54.430476 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:26:54.456247 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 17:26:54.462484 sh[688]: Success Sep 4 17:26:54.511869 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 4 17:26:54.936904 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:26:54.954758 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:26:54.960386 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 17:26:54.975451 kernel: BTRFS info (device dm-0): first mount of filesystem d110be6f-93a3-451a-b365-11b5d04e0602 Sep 4 17:26:54.975506 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:26:54.979141 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:26:54.981995 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:26:54.984436 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:26:55.739363 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:26:55.743219 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:26:55.753374 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Sep 4 17:26:55.760036 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 17:26:55.775581 kernel: BTRFS info (device sda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:26:55.775628 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:26:55.778395 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:26:55.836507 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:26:55.851864 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:26:55.852492 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:26:55.866493 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 17:26:55.873067 kernel: BTRFS info (device sda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:26:55.877381 systemd-networkd[862]: lo: Link UP Sep 4 17:26:55.877400 systemd-networkd[862]: lo: Gained carrier Sep 4 17:26:55.880351 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:26:55.882756 systemd-networkd[862]: Enumeration completed Sep 4 17:26:55.883611 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:26:55.884368 systemd-networkd[862]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:26:55.884371 systemd-networkd[862]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:26:55.888457 systemd[1]: Reached target network.target - Network. Sep 4 17:26:55.914005 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 17:26:55.945865 kernel: mlx5_core 527d:00:02.0 enP21117s1: Link up Sep 4 17:26:55.982864 kernel: hv_netvsc 000d3ab3-c13a-000d-3ab3-c13a000d3ab3 eth0: Data path switched to VF: enP21117s1 Sep 4 17:26:55.983144 systemd-networkd[862]: enP21117s1: Link UP Sep 4 17:26:55.983274 systemd-networkd[862]: eth0: Link UP Sep 4 17:26:55.983420 systemd-networkd[862]: eth0: Gained carrier Sep 4 17:26:55.983430 systemd-networkd[862]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:26:55.994885 systemd-networkd[862]: enP21117s1: Gained carrier Sep 4 17:26:56.024952 systemd-networkd[862]: eth0: DHCPv4 address 10.200.8.42/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 4 17:26:57.043203 systemd-networkd[862]: eth0: Gained IPv6LL Sep 4 17:26:57.531001 ignition[873]: Ignition 2.18.0 Sep 4 17:26:57.531011 ignition[873]: Stage: fetch-offline Sep 4 17:26:57.531055 ignition[873]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:26:57.531066 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:26:57.531232 ignition[873]: parsed url from cmdline: "" Sep 4 17:26:57.531237 ignition[873]: no config URL provided Sep 4 17:26:57.531245 ignition[873]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:26:57.531256 ignition[873]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:26:57.531267 ignition[873]: failed to fetch config: resource requires networking Sep 4 17:26:57.531447 ignition[873]: Ignition finished successfully Sep 4 17:26:57.551721 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:26:57.563195 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 4 17:26:57.580937 ignition[881]: Ignition 2.18.0 Sep 4 17:26:57.580946 ignition[881]: Stage: fetch Sep 4 17:26:57.581155 ignition[881]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:26:57.581165 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:26:57.581246 ignition[881]: parsed url from cmdline: "" Sep 4 17:26:57.581251 ignition[881]: no config URL provided Sep 4 17:26:57.581257 ignition[881]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:26:57.581265 ignition[881]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:26:57.581299 ignition[881]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 4 17:26:57.670155 ignition[881]: GET result: OK Sep 4 17:26:57.670358 ignition[881]: config has been read from IMDS userdata Sep 4 17:26:57.672036 ignition[881]: parsing config with SHA512: c5e8186fc376dfe47d89f02bf24427a7f6b58c5103f6f9fb59c3e79333ae3a24488aac155d6cb5db58b688b14ef5106d246f597e2e36dcfe012fe729e59bab67 Sep 4 17:26:57.677778 unknown[881]: fetched base config from "system" Sep 4 17:26:57.677804 unknown[881]: fetched base config from "system" Sep 4 17:26:57.679891 ignition[881]: fetch: fetch complete Sep 4 17:26:57.677821 unknown[881]: fetched user config from "azure" Sep 4 17:26:57.679899 ignition[881]: fetch: fetch passed Sep 4 17:26:57.685372 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 4 17:26:57.679956 ignition[881]: Ignition finished successfully Sep 4 17:26:57.696572 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 17:26:57.711599 ignition[888]: Ignition 2.18.0 Sep 4 17:26:57.711623 ignition[888]: Stage: kargs Sep 4 17:26:57.711818 ignition[888]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:26:57.711828 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:26:57.714192 ignition[888]: kargs: kargs passed Sep 4 17:26:57.717550 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 17:26:57.714230 ignition[888]: Ignition finished successfully Sep 4 17:26:57.730037 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 17:26:57.744605 ignition[895]: Ignition 2.18.0 Sep 4 17:26:57.744615 ignition[895]: Stage: disks Sep 4 17:26:57.744838 ignition[895]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:26:57.744865 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:26:57.748412 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 17:26:57.745699 ignition[895]: disks: disks passed Sep 4 17:26:57.745737 ignition[895]: Ignition finished successfully Sep 4 17:26:57.762279 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 17:26:57.765396 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:26:57.774305 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:26:57.777039 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:26:57.782344 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:26:57.796319 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 17:26:57.811966 systemd-networkd[862]: enP21117s1: Gained IPv6LL Sep 4 17:26:57.876601 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Sep 4 17:26:57.885779 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
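Annotation (not part of the log): the fetch stage above finds no config on the kernel command line or in /usr/lib/ignition/user.ign, so it pulls the provisioning config from the Azure Instance Metadata Service and logs a SHA512 of what it parsed. A rough sketch of that request follows, for illustration: the endpoint URL is taken verbatim from the log, while the "Metadata: true" header and the base64 decoding step are standard Azure IMDS behaviour assumed here rather than shown in the log.

    #!/usr/bin/env python3
    # Illustrative fetch of Azure IMDS userData, mirroring the GET logged above.
    import base64
    import hashlib
    import urllib.request

    IMDS_USERDATA = ("http://169.254.169.254/metadata/instance/compute/userData"
                     "?api-version=2021-01-01&format=text")

    def fetch_userdata() -> bytes:
        # IMDS only answers requests that carry the Metadata: true header.
        req = urllib.request.Request(IMDS_USERDATA, headers={"Metadata": "true"})
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read()

    if __name__ == "__main__":
        # userData is delivered base64-encoded; decode it before parsing.
        config = base64.b64decode(fetch_userdata())
        print("config SHA512:", hashlib.sha512(config).hexdigest())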
Sep 4 17:26:57.897939 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 17:26:57.997727 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 17:26:58.004985 kernel: EXT4-fs (sda9): mounted filesystem 84a5cefa-c3c7-47d7-9305-7e6877f73628 r/w with ordered data mode. Quota mode: none. Sep 4 17:26:58.000570 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 17:26:58.081930 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:26:58.088186 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 17:26:58.090893 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 4 17:26:58.098977 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 17:26:58.100504 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:26:58.115017 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:26:58.123170 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (915) Sep 4 17:26:58.131278 kernel: BTRFS info (device sda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:26:58.131324 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:26:58.133327 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:26:58.133560 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:26:58.140259 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:26:58.144153 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:26:59.041028 coreos-metadata[917]: Sep 04 17:26:59.040 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 4 17:26:59.047862 coreos-metadata[917]: Sep 04 17:26:59.047 INFO Fetch successful Sep 4 17:26:59.051623 coreos-metadata[917]: Sep 04 17:26:59.051 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 4 17:26:59.068176 coreos-metadata[917]: Sep 04 17:26:59.068 INFO Fetch successful Sep 4 17:26:59.076114 coreos-metadata[917]: Sep 04 17:26:59.076 INFO wrote hostname ci-3975.2.1-a-1f7e34d344 to /sysroot/etc/hostname Sep 4 17:26:59.077579 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 17:26:59.292411 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:26:59.332464 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:26:59.339833 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:26:59.346689 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:27:00.402381 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 17:27:00.413096 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:27:00.421986 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:27:00.427497 kernel: BTRFS info (device sda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:27:00.430982 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:27:00.455160 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 4 17:27:00.460894 ignition[1037]: INFO : Ignition 2.18.0 Sep 4 17:27:00.460894 ignition[1037]: INFO : Stage: mount Sep 4 17:27:00.468766 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:27:00.468766 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:27:00.468766 ignition[1037]: INFO : mount: mount passed Sep 4 17:27:00.468766 ignition[1037]: INFO : Ignition finished successfully Sep 4 17:27:00.464603 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:27:00.481451 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:27:00.489209 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:27:00.505861 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1050) Sep 4 17:27:00.505898 kernel: BTRFS info (device sda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:27:00.509866 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:27:00.514609 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:27:00.519864 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:27:00.520933 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:27:00.546622 ignition[1066]: INFO : Ignition 2.18.0 Sep 4 17:27:00.546622 ignition[1066]: INFO : Stage: files Sep 4 17:27:00.551610 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:27:00.551610 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:27:00.551610 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:27:00.551610 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:27:00.551610 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:27:00.823004 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:27:00.828471 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:27:00.832728 unknown[1066]: wrote ssh authorized keys file for user: core Sep 4 17:27:00.835386 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:27:00.835386 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:27:00.835386 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 17:27:00.942310 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 17:27:01.037427 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:27:01.044106 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:27:01.044106 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 17:27:01.044106 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:27:01.058280 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Sep 4 17:27:01.058280 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:27:01.067734 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:27:01.072370 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:27:01.077238 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:27:01.077238 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:27:01.077238 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:27:01.077238 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:27:01.077238 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:27:01.077238 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:27:01.077238 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Sep 4 17:27:01.515384 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 4 17:27:01.845468 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:27:01.845468 ignition[1066]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 4 17:27:01.860256 ignition[1066]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:27:01.860256 ignition[1066]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:27:01.860256 ignition[1066]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 4 17:27:01.860256 ignition[1066]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:27:01.860256 ignition[1066]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:27:01.860256 ignition[1066]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:27:01.860256 ignition[1066]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:27:01.860256 ignition[1066]: INFO : files: files passed Sep 4 17:27:01.860256 ignition[1066]: INFO : Ignition finished successfully Sep 4 17:27:01.853603 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:27:01.878729 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Sep 4 17:27:01.883010 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:27:01.894259 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:27:01.918611 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:27:01.918611 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:27:01.894340 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 17:27:01.920202 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:27:01.908654 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:27:01.917159 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:27:01.951991 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:27:01.980191 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:27:01.980293 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 17:27:01.986786 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 17:27:01.995750 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:27:01.998665 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:27:02.008032 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:27:02.022194 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:27:02.037969 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:27:02.049126 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:27:02.050801 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:27:02.051693 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:27:02.052094 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:27:02.052219 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:27:02.052992 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:27:02.053441 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:27:02.053857 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:27:02.054293 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:27:02.055449 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:27:02.055844 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:27:02.056240 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:27:02.056669 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:27:02.057119 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:27:02.057507 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:27:02.057909 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:27:02.058032 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:27:02.058788 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Sep 4 17:27:02.059678 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:27:02.060061 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:27:02.097535 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:27:02.103416 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:27:02.103548 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:27:02.109626 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:27:02.109763 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:27:02.189099 ignition[1120]: INFO : Ignition 2.18.0 Sep 4 17:27:02.189099 ignition[1120]: INFO : Stage: umount Sep 4 17:27:02.189099 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:27:02.189099 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:27:02.189099 ignition[1120]: INFO : umount: umount passed Sep 4 17:27:02.189099 ignition[1120]: INFO : Ignition finished successfully Sep 4 17:27:02.120636 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:27:02.120747 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:27:02.123482 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 4 17:27:02.123605 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 17:27:02.157152 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:27:02.159742 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:27:02.159933 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:27:02.177454 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:27:02.192248 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:27:02.192368 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:27:02.192782 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:27:02.192893 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:27:02.195616 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:27:02.195695 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:27:02.231585 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:27:02.231671 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:27:02.236998 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:27:02.237046 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:27:02.237513 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:27:02.237546 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:27:02.240316 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 17:27:02.240352 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 17:27:02.240747 systemd[1]: Stopped target network.target - Network. Sep 4 17:27:02.241182 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:27:02.241215 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:27:02.241656 systemd[1]: Stopped target paths.target - Path Units. 
Sep 4 17:27:02.243920 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:27:02.299166 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:27:02.309032 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:27:02.313806 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:27:02.318710 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:27:02.318767 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:27:02.323603 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:27:02.323645 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:27:02.328278 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:27:02.328336 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:27:02.333812 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:27:02.333866 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:27:02.339387 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:27:02.346729 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:27:02.355192 systemd-networkd[862]: eth0: DHCPv6 lease lost Sep 4 17:27:02.363328 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:27:02.366135 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:27:02.368581 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:27:02.374841 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:27:02.374951 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:27:02.389929 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:27:02.394965 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:27:02.395024 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:27:02.398450 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:27:02.399572 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:27:02.400245 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:27:02.409316 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:27:02.409406 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:27:02.417195 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:27:02.417617 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:27:02.419160 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:27:02.419203 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:27:02.440824 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:27:02.440964 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:27:02.447923 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:27:02.447998 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:27:02.467777 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Sep 4 17:27:02.476751 kernel: hv_netvsc 000d3ab3-c13a-000d-3ab3-c13a000d3ab3 eth0: Data path switched from VF: enP21117s1 Sep 4 17:27:02.467819 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:27:02.476746 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:27:02.476806 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:27:02.487331 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:27:02.487392 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:27:02.493523 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:27:02.493569 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:27:02.507357 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:27:02.510489 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:27:02.510541 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:27:02.516928 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:27:02.516966 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:27:02.520519 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:27:02.520612 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:27:02.527714 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:27:02.527791 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:27:02.843204 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:27:02.843359 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:27:02.848900 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:27:02.856793 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:27:02.856869 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:27:02.869024 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:27:03.296638 systemd[1]: Switching root. Sep 4 17:27:03.349995 systemd-journald[176]: Journal stopped Sep 4 17:27:10.206688 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Sep 4 17:27:10.206727 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:27:10.206745 kernel: SELinux: policy capability open_perms=1 Sep 4 17:27:10.206759 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:27:10.206772 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:27:10.206786 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:27:10.206802 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:27:10.206819 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:27:10.206833 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:27:10.213432 kernel: audit: type=1403 audit(1725470823.742:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:27:10.213463 systemd[1]: Successfully loaded SELinux policy in 78.187ms. Sep 4 17:27:10.213482 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.286ms. 
Sep 4 17:27:10.213499 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:27:10.213515 systemd[1]: Detected virtualization microsoft. Sep 4 17:27:10.213538 systemd[1]: Detected architecture x86-64. Sep 4 17:27:10.213554 systemd[1]: Detected first boot. Sep 4 17:27:10.213573 systemd[1]: Hostname set to . Sep 4 17:27:10.213589 systemd[1]: Initializing machine ID from random generator. Sep 4 17:27:10.213621 zram_generator::config[1164]: No configuration found. Sep 4 17:27:10.213641 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:27:10.213668 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 17:27:10.213683 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 17:27:10.213699 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 17:27:10.213716 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 17:27:10.213731 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:27:10.213748 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:27:10.213768 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 17:27:10.213784 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:27:10.213801 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:27:10.213829 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:27:10.213844 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 17:27:10.213867 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:27:10.213883 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:27:10.213898 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 17:27:10.213916 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 17:27:10.213932 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 17:27:10.213947 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:27:10.213963 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 17:27:10.213979 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:27:10.213994 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 17:27:10.214014 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 17:27:10.214030 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 17:27:10.214049 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:27:10.214064 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:27:10.214080 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:27:10.214096 systemd[1]: Reached target slices.target - Slice Units. 
Sep 4 17:27:10.214112 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:27:10.214128 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:27:10.214144 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:27:10.214162 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:27:10.214178 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:27:10.214195 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:27:10.214211 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 17:27:10.214227 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:27:10.214246 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:27:10.214262 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:27:10.214279 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:27:10.214295 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 17:27:10.214312 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:27:10.214328 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 17:27:10.214344 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:27:10.214361 systemd[1]: Reached target machines.target - Containers. Sep 4 17:27:10.214379 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:27:10.214396 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:27:10.214412 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:27:10.214428 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:27:10.214444 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:27:10.214460 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:27:10.214477 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:27:10.214493 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 17:27:10.214509 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:27:10.214528 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 17:27:10.214544 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 17:27:10.214560 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 17:27:10.214576 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 17:27:10.214592 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 17:27:10.214608 kernel: loop: module loaded Sep 4 17:27:10.214622 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:27:10.214638 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:27:10.214658 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 4 17:27:10.214674 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 17:27:10.214690 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:27:10.214706 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 17:27:10.214722 systemd[1]: Stopped verity-setup.service. Sep 4 17:27:10.214736 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:27:10.214752 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 17:27:10.214768 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:27:10.214811 systemd-journald[1269]: Collecting audit messages is disabled. Sep 4 17:27:10.215932 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:27:10.215952 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:27:10.215980 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 17:27:10.215993 kernel: ACPI: bus type drm_connector registered Sep 4 17:27:10.216009 systemd-journald[1269]: Journal started Sep 4 17:27:10.216034 systemd-journald[1269]: Runtime Journal (/run/log/journal/8dbac6b54c22423ab86262563a1f77de) is 8.0M, max 158.8M, 150.8M free. Sep 4 17:27:08.940184 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:27:09.488929 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 4 17:27:09.489308 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 17:27:10.222329 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:27:10.229530 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 17:27:10.232933 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 17:27:10.236765 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:27:10.245596 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 17:27:10.245814 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 17:27:10.249639 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:27:10.249824 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:27:10.253520 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:27:10.253724 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:27:10.258159 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:27:10.258352 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:27:10.262321 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:27:10.263017 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:27:10.267646 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:27:10.273908 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 17:27:10.277913 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 17:27:10.287265 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:27:10.295869 kernel: fuse: init (API version 7.39) Sep 4 17:27:10.296677 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Sep 4 17:27:10.296829 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 17:27:10.304764 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 17:27:10.311941 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 17:27:10.316071 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 17:27:10.319290 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 17:27:10.319335 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:27:10.323160 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 17:27:10.327437 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 17:27:10.337706 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 17:27:10.340686 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:27:10.362989 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 17:27:10.367142 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 17:27:10.370473 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:27:10.372737 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 17:27:10.376459 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:27:10.379085 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:27:10.388011 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 17:27:10.395389 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 17:27:10.402025 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 17:27:10.408549 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 17:27:10.412222 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 17:27:10.415789 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 17:27:10.429312 udevadm[1303]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 4 17:27:10.441989 kernel: loop0: detected capacity change from 0 to 209816 Sep 4 17:27:10.451867 kernel: block loop0: the capability attribute has been deprecated. Sep 4 17:27:10.452101 systemd-journald[1269]: Time spent on flushing to /var/log/journal/8dbac6b54c22423ab86262563a1f77de is 15.128ms for 965 entries. Sep 4 17:27:10.452101 systemd-journald[1269]: System Journal (/var/log/journal/8dbac6b54c22423ab86262563a1f77de) is 8.0M, max 2.6G, 2.6G free. Sep 4 17:27:10.491233 systemd-journald[1269]: Received client request to flush runtime journal. Sep 4 17:27:10.449297 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 17:27:10.455777 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 17:27:10.468043 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Sep 4 17:27:10.493951 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 17:27:10.539722 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 17:27:10.540903 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 4 17:27:10.548870 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 17:27:10.581872 kernel: loop1: detected capacity change from 0 to 139904 Sep 4 17:27:10.598078 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:27:10.828726 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 17:27:10.839076 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:27:10.867244 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Sep 4 17:27:10.867599 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Sep 4 17:27:10.872638 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:27:11.379883 kernel: loop2: detected capacity change from 0 to 80568 Sep 4 17:27:11.706881 kernel: loop3: detected capacity change from 0 to 56904 Sep 4 17:27:11.813874 kernel: loop4: detected capacity change from 0 to 209816 Sep 4 17:27:11.821868 kernel: loop5: detected capacity change from 0 to 139904 Sep 4 17:27:11.838875 kernel: loop6: detected capacity change from 0 to 80568 Sep 4 17:27:11.852868 kernel: loop7: detected capacity change from 0 to 56904 Sep 4 17:27:11.857335 (sd-merge)[1325]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 4 17:27:11.857817 (sd-merge)[1325]: Merged extensions into '/usr'. Sep 4 17:27:11.860842 systemd[1]: Reloading requested from client PID 1300 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 17:27:11.860878 systemd[1]: Reloading... Sep 4 17:27:11.909897 zram_generator::config[1346]: No configuration found. Sep 4 17:27:12.066637 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:27:12.144228 systemd[1]: Reloading finished in 282 ms. Sep 4 17:27:12.167005 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 17:27:12.176013 systemd[1]: Starting ensure-sysext.service... Sep 4 17:27:12.180668 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 4 17:27:12.195662 systemd[1]: Reloading requested from client PID 1407 ('systemctl') (unit ensure-sysext.service)... Sep 4 17:27:12.195680 systemd[1]: Reloading... Sep 4 17:27:12.248211 systemd-tmpfiles[1408]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 17:27:12.249731 systemd-tmpfiles[1408]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 17:27:12.256386 systemd-tmpfiles[1408]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 17:27:12.256642 systemd-tmpfiles[1408]: ACLs are not supported, ignoring. Sep 4 17:27:12.256702 systemd-tmpfiles[1408]: ACLs are not supported, ignoring. Sep 4 17:27:12.260002 zram_generator::config[1431]: No configuration found. Sep 4 17:27:12.276559 systemd-tmpfiles[1408]: Detected autofs mount point /boot during canonicalization of boot. 
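Annotation (not part of the log): the (sd-merge) lines above show systemd-sysext merging the containerd-flatcar, docker-flatcar, kubernetes and oem-azure extension images into /usr; the kubernetes image is the one Ignition linked into /etc/extensions earlier in this log. The sketch below merely lists candidate extension images in directories systemd-sysext is documented to scan; the search-path list is quoted from memory (and omits some locations), so treat it as an assumption.

    #!/usr/bin/env python3
    # Illustrative: list extension images a sysext merge would consider.
    from pathlib import Path

    SEARCH_PATHS = [
        "/etc/extensions",
        "/run/extensions",
        "/var/lib/extensions",
    ]

    def candidate_images():
        for base in SEARCH_PATHS:
            p = Path(base)
            if p.is_dir():
                # systemd-sysext accepts raw disk images (*.raw) as well as plain directories.
                yield from sorted(p.glob("*.raw"))
                yield from sorted(c for c in p.iterdir() if c.is_dir())

    if __name__ == "__main__":
        for img in candidate_images():
            print(img)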
Sep 4 17:27:12.276570 systemd-tmpfiles[1408]: Skipping /boot Sep 4 17:27:12.286812 systemd-tmpfiles[1408]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:27:12.286830 systemd-tmpfiles[1408]: Skipping /boot Sep 4 17:27:12.443928 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:27:12.519151 systemd[1]: Reloading finished in 322 ms. Sep 4 17:27:12.543553 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:27:12.560108 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:27:12.566980 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 17:27:12.579257 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 17:27:12.588027 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:27:12.594981 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 17:27:12.607803 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:27:12.609150 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:27:12.614721 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:27:12.622127 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:27:12.635124 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:27:12.640370 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:27:12.641993 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:27:12.648123 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 17:27:12.656070 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:27:12.662709 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:27:12.662906 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:27:12.669042 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:27:12.669223 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:27:12.673353 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:27:12.673511 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:27:12.691443 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:27:12.691838 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:27:12.698997 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:27:12.707946 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:27:12.714948 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 4 17:27:12.719416 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:27:12.719578 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:27:12.722637 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:27:12.729233 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:27:12.734581 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:27:12.736171 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:27:12.746416 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:27:12.747076 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:27:12.751155 augenrules[1523]: No rules Sep 4 17:27:12.752425 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:27:12.756347 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:27:12.756512 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:27:12.781744 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... Sep 4 17:27:12.787379 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:27:12.787714 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:27:12.796941 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:27:12.805430 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:27:12.812720 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:27:12.820097 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:27:12.823205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:27:12.823432 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:27:12.828245 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:27:12.829530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:27:12.829734 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:27:12.833639 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:27:12.834825 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:27:12.838386 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:27:12.839099 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:27:12.845930 systemd[1]: Finished ensure-sysext.service. Sep 4 17:27:12.853317 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:27:12.853522 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:27:12.861201 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:27:12.861324 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Sep 4 17:27:12.866281 systemd-resolved[1504]: Positive Trust Anchors: Sep 4 17:27:12.866298 systemd-resolved[1504]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:27:12.866356 systemd-resolved[1504]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 17:27:12.964674 systemd-resolved[1504]: Using system hostname 'ci-3975.2.1-a-1f7e34d344'. Sep 4 17:27:12.966963 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:27:12.970767 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:27:12.989553 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 17:27:12.999243 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:27:13.021783 systemd-udevd[1550]: Using default interface naming scheme 'v255'. Sep 4 17:27:13.169618 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:27:13.181662 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:27:13.260291 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 17:27:13.290888 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1561) Sep 4 17:27:13.348663 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:27:13.356985 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 17:27:13.366603 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. Sep 4 17:27:13.400103 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 17:27:13.401453 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:27:13.419999 kernel: hv_vmbus: registering driver hv_balloon Sep 4 17:27:13.423012 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 4 17:27:13.423055 kernel: hv_vmbus: registering driver hyperv_fb Sep 4 17:27:13.428190 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 4 17:27:13.431354 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 4 17:27:13.432866 kernel: Console: switching to colour dummy device 80x25 Sep 4 17:27:13.438764 kernel: Console: switching to colour frame buffer device 128x48 Sep 4 17:27:13.444701 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:27:13.444931 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:27:13.458145 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
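The positive trust anchor that systemd-resolved reports above is the DNSSEC root key-signing key (KSK-2017). Purely as an illustrative aid, not part of the boot flow, the short Python sketch below splits that DS record into its fields and maps the numeric codes to their standard IANA meanings (algorithm 8 = RSA/SHA-256, digest type 2 = SHA-256); everything else is plain string handling over the record exactly as logged.

    # Illustrative only: pick apart the root trust anchor logged by systemd-resolved.
    ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
    ALGORITHMS = {8: "RSA/SHA-256", 13: "ECDSA P-256/SHA-256"}  # IANA DNSSEC algorithm numbers
    DIGESTS = {1: "SHA-1", 2: "SHA-256"}                        # IANA DS digest types
    owner, _cls, _rrtype, key_tag, alg, digest_type, digest = ds.split(maxsplit=6)
    # digest is the SHA-256 hash of the root-zone DNSKEY this anchor pins.
    print(f"zone={owner} key_tag={key_tag} "
          f"algorithm={ALGORITHMS[int(alg)]} digest_type={DIGESTS[int(digest_type)]}")
    # zone=. key_tag=20326 algorithm=RSA/SHA-256 digest_type=SHA-256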
Sep 4 17:27:13.471186 systemd-networkd[1554]: lo: Link UP Sep 4 17:27:13.471194 systemd-networkd[1554]: lo: Gained carrier Sep 4 17:27:13.473408 systemd-networkd[1554]: Enumeration completed Sep 4 17:27:13.473500 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:27:13.473778 systemd-networkd[1554]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:27:13.473781 systemd-networkd[1554]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:27:13.477293 systemd[1]: Reached target network.target - Network. Sep 4 17:27:13.484992 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:27:13.613868 kernel: mlx5_core 527d:00:02.0 enP21117s1: Link up Sep 4 17:27:13.643885 kernel: hv_netvsc 000d3ab3-c13a-000d-3ab3-c13a000d3ab3 eth0: Data path switched to VF: enP21117s1 Sep 4 17:27:13.644375 systemd-networkd[1554]: enP21117s1: Link UP Sep 4 17:27:13.644516 systemd-networkd[1554]: eth0: Link UP Sep 4 17:27:13.644522 systemd-networkd[1554]: eth0: Gained carrier Sep 4 17:27:13.644544 systemd-networkd[1554]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:27:13.657286 systemd-networkd[1554]: enP21117s1: Gained carrier Sep 4 17:27:13.670066 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1564) Sep 4 17:27:13.734398 systemd-networkd[1554]: eth0: DHCPv4 address 10.200.8.42/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 4 17:27:13.771991 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 4 17:27:13.786105 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:27:13.812865 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Sep 4 17:27:13.916264 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:27:13.931079 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:27:14.036288 lvm[1638]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:27:14.058194 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:27:14.064287 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:27:14.066004 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:27:14.073375 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:27:14.079999 lvm[1641]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:27:14.111005 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:27:14.304298 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:27:14.835293 systemd-networkd[1554]: enP21117s1: Gained IPv6LL Sep 4 17:27:14.963266 systemd-networkd[1554]: eth0: Gained IPv6LL Sep 4 17:27:14.966752 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:27:14.971999 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:27:21.508357 ldconfig[1295]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Sep 4 17:27:21.521611 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 17:27:21.535012 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:27:21.544616 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:27:21.548195 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:27:21.551049 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:27:21.554373 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:27:21.557946 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:27:21.560877 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:27:21.564253 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:27:21.567735 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:27:21.567769 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:27:21.570189 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:27:21.573490 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:27:21.577756 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:27:21.584594 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:27:21.588168 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:27:21.591151 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:27:21.593689 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:27:21.596382 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:27:21.596414 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:27:21.604937 systemd[1]: Starting chronyd.service - NTP client/server... Sep 4 17:27:21.610003 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:27:21.618034 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 17:27:21.625721 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:27:21.630015 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:27:21.638035 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:27:21.640952 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:27:21.648922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:27:21.652680 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:27:21.658577 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:27:21.669425 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:27:21.675047 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:27:21.680026 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:27:21.689061 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 4 17:27:21.691820 jq[1656]: false Sep 4 17:27:21.692281 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:27:21.694092 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:27:21.696012 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:27:21.708484 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:27:21.715300 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:27:21.715524 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:27:21.741875 jq[1668]: true Sep 4 17:27:21.751235 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:27:21.751451 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:27:21.783700 extend-filesystems[1658]: Found loop4 Sep 4 17:27:21.788812 extend-filesystems[1658]: Found loop5 Sep 4 17:27:21.788812 extend-filesystems[1658]: Found loop6 Sep 4 17:27:21.788812 extend-filesystems[1658]: Found loop7 Sep 4 17:27:21.788812 extend-filesystems[1658]: Found sda Sep 4 17:27:21.788812 extend-filesystems[1658]: Found sda1 Sep 4 17:27:21.788812 extend-filesystems[1658]: Found sda2 Sep 4 17:27:21.788812 extend-filesystems[1658]: Found sda3 Sep 4 17:27:21.788812 extend-filesystems[1658]: Found usr Sep 4 17:27:21.788812 extend-filesystems[1658]: Found sda4 Sep 4 17:27:21.788812 extend-filesystems[1658]: Found sda6 Sep 4 17:27:21.788812 extend-filesystems[1658]: Found sda7 Sep 4 17:27:21.788812 extend-filesystems[1658]: Found sda9 Sep 4 17:27:21.788812 extend-filesystems[1658]: Checking size of /dev/sda9 Sep 4 17:27:21.920165 extend-filesystems[1658]: Old size kept for /dev/sda9 Sep 4 17:27:21.920165 extend-filesystems[1658]: Found sr0 Sep 4 17:27:21.930377 update_engine[1667]: I0904 17:27:21.910701 1667 main.cc:92] Flatcar Update Engine starting Sep 4 17:27:21.793052 (chronyd)[1652]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Sep 4 17:27:21.833496 chronyd[1700]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Sep 4 17:27:21.813176 (ntainerd)[1690]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:27:21.849640 chronyd[1700]: Timezone right/UTC failed leap second check, ignoring Sep 4 17:27:21.813696 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:27:21.849832 chronyd[1700]: Loaded seccomp filter (level 2) Sep 4 17:27:21.935042 jq[1683]: true Sep 4 17:27:21.819619 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:27:21.819818 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:27:21.865163 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:27:21.865923 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:27:21.871602 systemd[1]: Started chronyd.service - NTP client/server. Sep 4 17:27:21.944637 systemd-logind[1666]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 17:27:21.947974 systemd-logind[1666]: New seat seat0. 
Sep 4 17:27:21.950582 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:27:21.995308 bash[1725]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:27:21.997303 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:27:22.005615 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 17:27:22.022036 tar[1674]: linux-amd64/helm Sep 4 17:27:22.054496 sshd_keygen[1692]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:27:22.062362 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1733) Sep 4 17:27:22.059295 dbus-daemon[1655]: [system] SELinux support is enabled Sep 4 17:27:22.059462 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:27:22.069565 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:27:22.069607 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:27:22.073251 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:27:22.073278 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:27:22.091518 dbus-daemon[1655]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 4 17:27:22.094468 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:27:22.104155 update_engine[1667]: I0904 17:27:22.100041 1667 update_check_scheduler.cc:74] Next update check in 3m22s Sep 4 17:27:22.118223 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:27:22.210605 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:27:22.224185 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:27:22.230507 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 4 17:27:22.239184 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:27:22.239605 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:27:22.252896 coreos-metadata[1654]: Sep 04 17:27:22.252 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 4 17:27:22.258686 coreos-metadata[1654]: Sep 04 17:27:22.255 INFO Fetch successful Sep 4 17:27:22.258686 coreos-metadata[1654]: Sep 04 17:27:22.255 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 4 17:27:22.254155 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Sep 4 17:27:22.262910 coreos-metadata[1654]: Sep 04 17:27:22.261 INFO Fetch successful Sep 4 17:27:22.262910 coreos-metadata[1654]: Sep 04 17:27:22.261 INFO Fetching http://168.63.129.16/machine/af23304c-a04a-4083-b9c6-f967f17a0e4b/9ebbefb5%2D6588%2D433e%2Dbbbe%2D477a5dd5bfa0.%5Fci%2D3975.2.1%2Da%2D1f7e34d344?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 4 17:27:22.264891 coreos-metadata[1654]: Sep 04 17:27:22.264 INFO Fetch successful Sep 4 17:27:22.266055 coreos-metadata[1654]: Sep 04 17:27:22.265 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 4 17:27:22.283503 coreos-metadata[1654]: Sep 04 17:27:22.281 INFO Fetch successful Sep 4 17:27:22.318572 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 4 17:27:22.330866 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:27:22.348181 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:27:22.363884 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 17:27:22.367311 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:27:22.372407 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 17:27:22.378752 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:27:22.383614 locksmithd[1763]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:27:22.709222 tar[1674]: linux-amd64/LICENSE Sep 4 17:27:22.709418 tar[1674]: linux-amd64/README.md Sep 4 17:27:22.721151 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:27:23.249530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:27:23.255284 (kubelet)[1807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:27:23.666598 containerd[1690]: time="2024-09-04T17:27:23.666137200Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Sep 4 17:27:23.699408 containerd[1690]: time="2024-09-04T17:27:23.699372300Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:27:23.699504 containerd[1690]: time="2024-09-04T17:27:23.699411000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:27:23.701651 containerd[1690]: time="2024-09-04T17:27:23.700730000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:27:23.701651 containerd[1690]: time="2024-09-04T17:27:23.700755900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:27:23.701651 containerd[1690]: time="2024-09-04T17:27:23.700956500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:27:23.701651 containerd[1690]: time="2024-09-04T17:27:23.700991300Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:27:23.701651 containerd[1690]: time="2024-09-04T17:27:23.701071100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:27:23.701651 containerd[1690]: time="2024-09-04T17:27:23.701110500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:27:23.701651 containerd[1690]: time="2024-09-04T17:27:23.701120400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:27:23.701651 containerd[1690]: time="2024-09-04T17:27:23.701176600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:27:23.701651 containerd[1690]: time="2024-09-04T17:27:23.701330500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:27:23.701651 containerd[1690]: time="2024-09-04T17:27:23.701343700Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 4 17:27:23.701651 containerd[1690]: time="2024-09-04T17:27:23.701351600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:27:23.701935 containerd[1690]: time="2024-09-04T17:27:23.701471700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:27:23.701935 containerd[1690]: time="2024-09-04T17:27:23.701485300Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 17:27:23.701935 containerd[1690]: time="2024-09-04T17:27:23.701538400Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 4 17:27:23.701935 containerd[1690]: time="2024-09-04T17:27:23.701549200Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:27:23.767329 containerd[1690]: time="2024-09-04T17:27:23.766972900Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:27:23.767329 containerd[1690]: time="2024-09-04T17:27:23.767016900Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:27:23.767329 containerd[1690]: time="2024-09-04T17:27:23.767036500Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:27:23.767329 containerd[1690]: time="2024-09-04T17:27:23.767085200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:27:23.767329 containerd[1690]: time="2024-09-04T17:27:23.767103400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Sep 4 17:27:23.767329 containerd[1690]: time="2024-09-04T17:27:23.767116400Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:27:23.767329 containerd[1690]: time="2024-09-04T17:27:23.767130700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:27:23.767329 containerd[1690]: time="2024-09-04T17:27:23.767259500Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:27:23.767329 containerd[1690]: time="2024-09-04T17:27:23.767277400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:27:23.767329 containerd[1690]: time="2024-09-04T17:27:23.767294200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:27:23.767329 containerd[1690]: time="2024-09-04T17:27:23.767311200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:27:23.767715 containerd[1690]: time="2024-09-04T17:27:23.767393800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:27:23.767715 containerd[1690]: time="2024-09-04T17:27:23.767447100Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:27:23.767715 containerd[1690]: time="2024-09-04T17:27:23.767469700Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:27:23.767715 containerd[1690]: time="2024-09-04T17:27:23.767487100Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:27:23.767715 containerd[1690]: time="2024-09-04T17:27:23.767518200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:27:23.767715 containerd[1690]: time="2024-09-04T17:27:23.767536300Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:27:23.767715 containerd[1690]: time="2024-09-04T17:27:23.767552900Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:27:23.767715 containerd[1690]: time="2024-09-04T17:27:23.767568000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:27:23.767972 containerd[1690]: time="2024-09-04T17:27:23.767724300Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:27:23.768554 containerd[1690]: time="2024-09-04T17:27:23.768164700Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:27:23.768554 containerd[1690]: time="2024-09-04T17:27:23.768223000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:27:23.768554 containerd[1690]: time="2024-09-04T17:27:23.768242500Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:27:23.768554 containerd[1690]: time="2024-09-04T17:27:23.768270200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Sep 4 17:27:23.768554 containerd[1690]: time="2024-09-04T17:27:23.768344100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:27:23.768554 containerd[1690]: time="2024-09-04T17:27:23.768360800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:27:23.768554 containerd[1690]: time="2024-09-04T17:27:23.768421200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:27:23.768554 containerd[1690]: time="2024-09-04T17:27:23.768438400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:27:23.768554 containerd[1690]: time="2024-09-04T17:27:23.768454600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:27:23.768554 containerd[1690]: time="2024-09-04T17:27:23.768470900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:27:23.768554 containerd[1690]: time="2024-09-04T17:27:23.768498200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:27:23.768554 containerd[1690]: time="2024-09-04T17:27:23.768513400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:27:23.768554 containerd[1690]: time="2024-09-04T17:27:23.768530500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:27:23.769051 containerd[1690]: time="2024-09-04T17:27:23.768731300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:27:23.769051 containerd[1690]: time="2024-09-04T17:27:23.768755800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:27:23.769051 containerd[1690]: time="2024-09-04T17:27:23.768772700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:27:23.770441 containerd[1690]: time="2024-09-04T17:27:23.769342800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:27:23.770441 containerd[1690]: time="2024-09-04T17:27:23.769384900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:27:23.770441 containerd[1690]: time="2024-09-04T17:27:23.769410600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:27:23.770441 containerd[1690]: time="2024-09-04T17:27:23.769432900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:27:23.770441 containerd[1690]: time="2024-09-04T17:27:23.769452900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 4 17:27:23.770714 containerd[1690]: time="2024-09-04T17:27:23.769821100Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:27:23.770714 containerd[1690]: time="2024-09-04T17:27:23.769943600Z" level=info msg="Connect containerd service" Sep 4 17:27:23.770714 containerd[1690]: time="2024-09-04T17:27:23.769995900Z" level=info msg="using legacy CRI server" Sep 4 17:27:23.770714 containerd[1690]: time="2024-09-04T17:27:23.770005800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:27:23.771293 containerd[1690]: time="2024-09-04T17:27:23.770903800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:27:23.771992 containerd[1690]: time="2024-09-04T17:27:23.771963600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:27:23.772211 
containerd[1690]: time="2024-09-04T17:27:23.772100300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:27:23.772211 containerd[1690]: time="2024-09-04T17:27:23.772127900Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:27:23.772211 containerd[1690]: time="2024-09-04T17:27:23.772163600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:27:23.772211 containerd[1690]: time="2024-09-04T17:27:23.772182600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:27:23.772582 containerd[1690]: time="2024-09-04T17:27:23.772396700Z" level=info msg="Start subscribing containerd event" Sep 4 17:27:23.772582 containerd[1690]: time="2024-09-04T17:27:23.772556600Z" level=info msg="Start recovering state" Sep 4 17:27:23.773121 containerd[1690]: time="2024-09-04T17:27:23.772716900Z" level=info msg="Start event monitor" Sep 4 17:27:23.773121 containerd[1690]: time="2024-09-04T17:27:23.772748200Z" level=info msg="Start snapshots syncer" Sep 4 17:27:23.773121 containerd[1690]: time="2024-09-04T17:27:23.772767600Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:27:23.773121 containerd[1690]: time="2024-09-04T17:27:23.772778900Z" level=info msg="Start streaming server" Sep 4 17:27:23.773121 containerd[1690]: time="2024-09-04T17:27:23.772836200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:27:23.773121 containerd[1690]: time="2024-09-04T17:27:23.772903400Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:27:23.773121 containerd[1690]: time="2024-09-04T17:27:23.772978500Z" level=info msg="containerd successfully booted in 0.108214s" Sep 4 17:27:23.773073 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:27:23.780556 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:27:23.785415 systemd[1]: Startup finished in 497ms (firmware) + 1min 8.831s (loader) + 994ms (kernel) + 13.388s (initrd) + 20.119s (userspace) = 1min 43.831s. 
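As a quick sanity check on the startup summary just above, the reported stages do add up to the printed total once rounding of the individual figures is taken into account; a throwaway Python check:

    # Sum the startup-time breakdown logged above (values in seconds).
    parts = {"firmware": 0.497, "loader": 68.831, "kernel": 0.994,
             "initrd": 13.388, "userspace": 20.119}
    print(f"{sum(parts.values()):.3f} s")
    # 103.829 s, i.e. ~1min 43.8s, matching the reported 1min 43.831s
    # up to rounding of the displayed components.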
Sep 4 17:27:24.132664 waagent[1788]: 2024-09-04T17:27:24.132507Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Sep 4 17:27:24.136398 waagent[1788]: 2024-09-04T17:27:24.136324Z INFO Daemon Daemon OS: flatcar 3975.2.1 Sep 4 17:27:24.139379 waagent[1788]: 2024-09-04T17:27:24.138796Z INFO Daemon Daemon Python: 3.11.9 Sep 4 17:27:24.144162 waagent[1788]: 2024-09-04T17:27:24.142125Z INFO Daemon Daemon Run daemon Sep 4 17:27:24.144162 waagent[1788]: 2024-09-04T17:27:24.143342Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3975.2.1' Sep 4 17:27:24.144162 waagent[1788]: 2024-09-04T17:27:24.143719Z INFO Daemon Daemon Using waagent for provisioning Sep 4 17:27:24.144832 waagent[1788]: 2024-09-04T17:27:24.144794Z INFO Daemon Daemon Activate resource disk Sep 4 17:27:24.145166 waagent[1788]: 2024-09-04T17:27:24.145131Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 4 17:27:24.149055 waagent[1788]: 2024-09-04T17:27:24.148999Z INFO Daemon Daemon Found device: None Sep 4 17:27:24.149465 waagent[1788]: 2024-09-04T17:27:24.149429Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 4 17:27:24.150364 waagent[1788]: 2024-09-04T17:27:24.150328Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 4 17:27:24.152121 waagent[1788]: 2024-09-04T17:27:24.152077Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 4 17:27:24.153240 waagent[1788]: 2024-09-04T17:27:24.153201Z INFO Daemon Daemon Running default provisioning handler Sep 4 17:27:24.178985 waagent[1788]: 2024-09-04T17:27:24.178750Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 4 17:27:24.186139 waagent[1788]: 2024-09-04T17:27:24.186086Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 4 17:27:24.187299 waagent[1788]: 2024-09-04T17:27:24.187240Z INFO Daemon Daemon cloud-init is enabled: False Sep 4 17:27:24.188342 waagent[1788]: 2024-09-04T17:27:24.188299Z INFO Daemon Daemon Copying ovf-env.xml Sep 4 17:27:24.195336 kubelet[1807]: E0904 17:27:24.195271 1807 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:27:24.199077 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:27:24.199192 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:27:24.199452 systemd[1]: kubelet.service: Consumed 1.010s CPU time. Sep 4 17:27:24.201787 login[1792]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 4 17:27:24.203312 login[1797]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 4 17:27:24.228869 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:27:24.234111 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:27:24.236816 systemd-logind[1666]: New session 1 of user core. 
Sep 4 17:27:24.241719 systemd-logind[1666]: New session 2 of user core. Sep 4 17:27:24.247974 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:27:24.253149 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 17:27:24.257755 (systemd)[1836]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:27:24.306195 waagent[1788]: 2024-09-04T17:27:24.306103Z INFO Daemon Daemon Successfully mounted dvd Sep 4 17:27:24.412713 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 4 17:27:24.413805 waagent[1788]: 2024-09-04T17:27:24.413540Z INFO Daemon Daemon Detect protocol endpoint Sep 4 17:27:24.426431 waagent[1788]: 2024-09-04T17:27:24.415003Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 4 17:27:24.426431 waagent[1788]: 2024-09-04T17:27:24.416008Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 4 17:27:24.426431 waagent[1788]: 2024-09-04T17:27:24.416529Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 4 17:27:24.426431 waagent[1788]: 2024-09-04T17:27:24.417640Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 4 17:27:24.426431 waagent[1788]: 2024-09-04T17:27:24.418502Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 4 17:27:24.430024 waagent[1788]: 2024-09-04T17:27:24.428525Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 4 17:27:24.430024 waagent[1788]: 2024-09-04T17:27:24.429302Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 4 17:27:24.430172 waagent[1788]: 2024-09-04T17:27:24.430138Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 4 17:27:24.573242 systemd[1836]: Queued start job for default target default.target. Sep 4 17:27:24.582424 systemd[1836]: Created slice app.slice - User Application Slice. Sep 4 17:27:24.582575 systemd[1836]: Reached target paths.target - Paths. Sep 4 17:27:24.582665 systemd[1836]: Reached target timers.target - Timers. Sep 4 17:27:24.585960 systemd[1836]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:27:24.598767 systemd[1836]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:27:24.599805 systemd[1836]: Reached target sockets.target - Sockets. Sep 4 17:27:24.599964 systemd[1836]: Reached target basic.target - Basic System. Sep 4 17:27:24.600090 systemd[1836]: Reached target default.target - Main User Target. Sep 4 17:27:24.600193 systemd[1836]: Startup finished in 336ms. Sep 4 17:27:24.601015 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:27:24.606012 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 17:27:24.607085 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:27:24.707665 waagent[1788]: 2024-09-04T17:27:24.707518Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 4 17:27:24.713317 waagent[1788]: 2024-09-04T17:27:24.711665Z INFO Daemon Daemon Forcing an update of the goal state. 
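Both coreos-metadata earlier in the boot and waagent just above talk to the same two well-known endpoints: the Azure wireserver at 168.63.129.16 and the instance metadata service at 169.254.169.254. The Python sketch below is a minimal, illustrative reproduction of those probes, not the agents' actual code; it assumes the standard behaviour that the wireserver answers plain HTTP from inside the guest and that IMDS requires the "Metadata: true" request header, and it only works from within an Azure VM.

    # Sketch only -- reproduces the two metadata probes visible in this log.
    import urllib.request

    # Wireserver version query, as fetched by coreos-metadata (?comp=versions):
    # a plain GET returning XML that lists the supported goal-state protocol versions.
    with urllib.request.urlopen("http://168.63.129.16/?comp=versions", timeout=5) as r:
        print(r.read().decode())

    # IMDS query for the VM size, the same URL coreos-metadata fetched above;
    # the Metadata header is mandatory for the instance metadata service.
    req = urllib.request.Request(
        "http://169.254.169.254/metadata/instance/compute/vmSize"
        "?api-version=2017-08-01&format=text",
        headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as r:
        print(r.read().decode())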
Sep 4 17:27:24.716482 waagent[1788]: 2024-09-04T17:27:24.716428Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 4 17:27:24.748151 waagent[1788]: 2024-09-04T17:27:24.748093Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.154 Sep 4 17:27:24.768770 waagent[1788]: 2024-09-04T17:27:24.749968Z INFO Daemon Sep 4 17:27:24.768770 waagent[1788]: 2024-09-04T17:27:24.752336Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: f463ec9a-91b8-47d8-a0ea-bf428cba1ceb eTag: 7043803916493591463 source: Fabric] Sep 4 17:27:24.768770 waagent[1788]: 2024-09-04T17:27:24.754147Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 4 17:27:24.768770 waagent[1788]: 2024-09-04T17:27:24.754833Z INFO Daemon Sep 4 17:27:24.768770 waagent[1788]: 2024-09-04T17:27:24.755001Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 4 17:27:24.768770 waagent[1788]: 2024-09-04T17:27:24.759270Z INFO Daemon Daemon Downloading artifacts profile blob Sep 4 17:27:24.841239 waagent[1788]: 2024-09-04T17:27:24.841183Z INFO Daemon Downloaded certificate {'thumbprint': '4DFF183B718FAB955C80408CC99C1080AFFF6045', 'hasPrivateKey': False} Sep 4 17:27:24.847163 waagent[1788]: 2024-09-04T17:27:24.847111Z INFO Daemon Downloaded certificate {'thumbprint': '047EF415D54C0C4241401CD6773ECDC67C8F7FC5', 'hasPrivateKey': True} Sep 4 17:27:24.853925 waagent[1788]: 2024-09-04T17:27:24.849063Z INFO Daemon Fetch goal state completed Sep 4 17:27:24.855645 waagent[1788]: 2024-09-04T17:27:24.855601Z INFO Daemon Daemon Starting provisioning Sep 4 17:27:24.863022 waagent[1788]: 2024-09-04T17:27:24.856830Z INFO Daemon Daemon Handle ovf-env.xml. Sep 4 17:27:24.863022 waagent[1788]: 2024-09-04T17:27:24.857872Z INFO Daemon Daemon Set hostname [ci-3975.2.1-a-1f7e34d344] Sep 4 17:27:24.863022 waagent[1788]: 2024-09-04T17:27:24.862913Z INFO Daemon Daemon Publish hostname [ci-3975.2.1-a-1f7e34d344] Sep 4 17:27:24.871325 waagent[1788]: 2024-09-04T17:27:24.864179Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 4 17:27:24.871325 waagent[1788]: 2024-09-04T17:27:24.864706Z INFO Daemon Daemon Primary interface is [eth0] Sep 4 17:27:24.881067 systemd-networkd[1554]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:27:24.881075 systemd-networkd[1554]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:27:24.881115 systemd-networkd[1554]: eth0: DHCP lease lost Sep 4 17:27:24.882210 waagent[1788]: 2024-09-04T17:27:24.882147Z INFO Daemon Daemon Create user account if not exists Sep 4 17:27:24.899513 waagent[1788]: 2024-09-04T17:27:24.883308Z INFO Daemon Daemon User core already exists, skip useradd Sep 4 17:27:24.899513 waagent[1788]: 2024-09-04T17:27:24.883824Z INFO Daemon Daemon Configure sudoer Sep 4 17:27:24.899513 waagent[1788]: 2024-09-04T17:27:24.884525Z INFO Daemon Daemon Configure sshd Sep 4 17:27:24.899513 waagent[1788]: 2024-09-04T17:27:24.885376Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 4 17:27:24.899513 waagent[1788]: 2024-09-04T17:27:24.886122Z INFO Daemon Daemon Deploy ssh public key. 
Sep 4 17:27:24.902133 systemd-networkd[1554]: eth0: DHCPv6 lease lost Sep 4 17:27:24.920943 systemd-networkd[1554]: eth0: DHCPv4 address 10.200.8.42/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 4 17:27:26.188597 waagent[1788]: 2024-09-04T17:27:26.188507Z INFO Daemon Daemon Provisioning complete Sep 4 17:27:26.201787 waagent[1788]: 2024-09-04T17:27:26.201736Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 4 17:27:26.209416 waagent[1788]: 2024-09-04T17:27:26.203141Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 4 17:27:26.209416 waagent[1788]: 2024-09-04T17:27:26.203591Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Sep 4 17:27:26.326253 waagent[1877]: 2024-09-04T17:27:26.326166Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Sep 4 17:27:26.326630 waagent[1877]: 2024-09-04T17:27:26.326319Z INFO ExtHandler ExtHandler OS: flatcar 3975.2.1 Sep 4 17:27:26.326630 waagent[1877]: 2024-09-04T17:27:26.326403Z INFO ExtHandler ExtHandler Python: 3.11.9 Sep 4 17:27:26.339346 waagent[1877]: 2024-09-04T17:27:26.339281Z INFO ExtHandler ExtHandler Distro: flatcar-3975.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 4 17:27:26.339519 waagent[1877]: 2024-09-04T17:27:26.339476Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 17:27:26.339604 waagent[1877]: 2024-09-04T17:27:26.339563Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 17:27:26.346462 waagent[1877]: 2024-09-04T17:27:26.346399Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 4 17:27:26.351488 waagent[1877]: 2024-09-04T17:27:26.351421Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.154 Sep 4 17:27:26.351940 waagent[1877]: 2024-09-04T17:27:26.351888Z INFO ExtHandler Sep 4 17:27:26.352034 waagent[1877]: 2024-09-04T17:27:26.351983Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 2ecf595d-1bc4-43c4-bb32-a0892a5f7487 eTag: 7043803916493591463 source: Fabric] Sep 4 17:27:26.352338 waagent[1877]: 2024-09-04T17:27:26.352283Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Sep 4 17:27:26.352865 waagent[1877]: 2024-09-04T17:27:26.352809Z INFO ExtHandler Sep 4 17:27:26.352955 waagent[1877]: 2024-09-04T17:27:26.352917Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 4 17:27:26.356122 waagent[1877]: 2024-09-04T17:27:26.356078Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 4 17:27:26.432731 waagent[1877]: 2024-09-04T17:27:26.432660Z INFO ExtHandler Downloaded certificate {'thumbprint': '4DFF183B718FAB955C80408CC99C1080AFFF6045', 'hasPrivateKey': False} Sep 4 17:27:26.433144 waagent[1877]: 2024-09-04T17:27:26.433090Z INFO ExtHandler Downloaded certificate {'thumbprint': '047EF415D54C0C4241401CD6773ECDC67C8F7FC5', 'hasPrivateKey': True} Sep 4 17:27:26.433550 waagent[1877]: 2024-09-04T17:27:26.433500Z INFO ExtHandler Fetch goal state completed Sep 4 17:27:26.448075 waagent[1877]: 2024-09-04T17:27:26.447973Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1877 Sep 4 17:27:26.448214 waagent[1877]: 2024-09-04T17:27:26.448133Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 4 17:27:26.449740 waagent[1877]: 2024-09-04T17:27:26.449690Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3975.2.1', '', 'Flatcar Container Linux by Kinvolk'] Sep 4 17:27:26.450130 waagent[1877]: 2024-09-04T17:27:26.450079Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 4 17:27:26.462619 waagent[1877]: 2024-09-04T17:27:26.462581Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 4 17:27:26.462815 waagent[1877]: 2024-09-04T17:27:26.462772Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 4 17:27:26.469289 waagent[1877]: 2024-09-04T17:27:26.469117Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 4 17:27:26.475728 systemd[1]: Reloading requested from client PID 1892 ('systemctl') (unit waagent.service)... Sep 4 17:27:26.475743 systemd[1]: Reloading... Sep 4 17:27:26.559918 zram_generator::config[1926]: No configuration found. Sep 4 17:27:26.672098 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:27:26.754986 systemd[1]: Reloading finished in 278 ms. Sep 4 17:27:26.781868 waagent[1877]: 2024-09-04T17:27:26.779388Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Sep 4 17:27:26.787411 systemd[1]: Reloading requested from client PID 1980 ('systemctl') (unit waagent.service)... Sep 4 17:27:26.787517 systemd[1]: Reloading... Sep 4 17:27:26.854869 zram_generator::config[2009]: No configuration found. Sep 4 17:27:26.983553 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:27:27.077133 systemd[1]: Reloading finished in 289 ms. 
Sep 4 17:27:27.104803 waagent[1877]: 2024-09-04T17:27:27.104699Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 4 17:27:27.105939 waagent[1877]: 2024-09-04T17:27:27.104935Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 4 17:27:27.951569 waagent[1877]: 2024-09-04T17:27:27.951479Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 4 17:27:27.952251 waagent[1877]: 2024-09-04T17:27:27.952188Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Sep 4 17:27:27.953014 waagent[1877]: 2024-09-04T17:27:27.952955Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 4 17:27:27.953151 waagent[1877]: 2024-09-04T17:27:27.953091Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 17:27:27.953590 waagent[1877]: 2024-09-04T17:27:27.953540Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 4 17:27:27.953659 waagent[1877]: 2024-09-04T17:27:27.953617Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 17:27:27.954016 waagent[1877]: 2024-09-04T17:27:27.953952Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 4 17:27:27.954124 waagent[1877]: 2024-09-04T17:27:27.954032Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 17:27:27.954411 waagent[1877]: 2024-09-04T17:27:27.954360Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 4 17:27:27.954479 waagent[1877]: 2024-09-04T17:27:27.954415Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 17:27:27.954610 waagent[1877]: 2024-09-04T17:27:27.954555Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 4 17:27:27.954917 waagent[1877]: 2024-09-04T17:27:27.954808Z INFO EnvHandler ExtHandler Configure routes Sep 4 17:27:27.955128 waagent[1877]: 2024-09-04T17:27:27.955090Z INFO EnvHandler ExtHandler Gateway:None Sep 4 17:27:27.955412 waagent[1877]: 2024-09-04T17:27:27.955366Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 4 17:27:27.955762 waagent[1877]: 2024-09-04T17:27:27.955721Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
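The EnvHandler above notes that no DROP rule exists yet and then reports the persistent firewall rules as set up; the concrete rules appear in the iptables dump a little further down (ACCEPT to 168.63.129.16 on tcp/53, ACCEPT for UID 0, DROP for other new connections). A stand-alone sketch that applies those same three rules (not waagent's own code; requires root and the iptables binary):

import subprocess

WIRESERVER = "168.63.129.16"

RULES = [
    # Allow DNS-port traffic to the wireserver from any process.
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
    # Allow root-owned processes (the agent runs as root) to reach the wireserver.
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # Drop new connections to the wireserver from everything else.
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

def apply_rules() -> None:
    for rule in RULES:
        subprocess.run(["iptables", "-w"] + rule, check=True)

if __name__ == "__main__":
    apply_rules()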
Sep 4 17:27:27.955871 waagent[1877]: 2024-09-04T17:27:27.955809Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 4 17:27:27.955871 waagent[1877]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 4 17:27:27.955871 waagent[1877]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Sep 4 17:27:27.955871 waagent[1877]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 4 17:27:27.955871 waagent[1877]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 4 17:27:27.955871 waagent[1877]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 4 17:27:27.955871 waagent[1877]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 4 17:27:27.956355 waagent[1877]: 2024-09-04T17:27:27.956220Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 4 17:27:27.956833 waagent[1877]: 2024-09-04T17:27:27.956748Z INFO EnvHandler ExtHandler Routes:None Sep 4 17:27:27.965873 waagent[1877]: 2024-09-04T17:27:27.963979Z INFO ExtHandler ExtHandler Sep 4 17:27:27.965873 waagent[1877]: 2024-09-04T17:27:27.964089Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 17dd2dae-632b-4d7b-9486-f96555d9ef82 correlation 035bb3f0-598a-491d-bfbf-9b18a4b8eed6 created: 2024-09-04T17:25:24.155663Z] Sep 4 17:27:27.965873 waagent[1877]: 2024-09-04T17:27:27.964525Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 4 17:27:27.965873 waagent[1877]: 2024-09-04T17:27:27.965347Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Sep 4 17:27:27.983103 waagent[1877]: 2024-09-04T17:27:27.983052Z INFO MonitorHandler ExtHandler Network interfaces: Sep 4 17:27:27.983103 waagent[1877]: Executing ['ip', '-a', '-o', 'link']: Sep 4 17:27:27.983103 waagent[1877]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 4 17:27:27.983103 waagent[1877]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b3:c1:3a brd ff:ff:ff:ff:ff:ff Sep 4 17:27:27.983103 waagent[1877]: 3: enP21117s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b3:c1:3a brd ff:ff:ff:ff:ff:ff\ altname enP21117p0s2 Sep 4 17:27:27.983103 waagent[1877]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 4 17:27:27.983103 waagent[1877]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 4 17:27:27.983103 waagent[1877]: 2: eth0 inet 10.200.8.42/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 4 17:27:27.983103 waagent[1877]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 4 17:27:27.983103 waagent[1877]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 4 17:27:27.983103 waagent[1877]: 2: eth0 inet6 fe80::20d:3aff:feb3:c13a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 4 17:27:27.983103 waagent[1877]: 3: enP21117s1 inet6 fe80::20d:3aff:feb3:c13a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 4 17:27:28.002680 waagent[1877]: 2024-09-04T17:27:28.002634Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C4D2339D-053B-4685-BDFF-3D91B788AC4F;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Sep 4 
17:27:28.015549 waagent[1877]: 2024-09-04T17:27:28.015496Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Sep 4 17:27:28.015549 waagent[1877]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:27:28.015549 waagent[1877]: pkts bytes target prot opt in out source destination Sep 4 17:27:28.015549 waagent[1877]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:27:28.015549 waagent[1877]: pkts bytes target prot opt in out source destination Sep 4 17:27:28.015549 waagent[1877]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:27:28.015549 waagent[1877]: pkts bytes target prot opt in out source destination Sep 4 17:27:28.015549 waagent[1877]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 4 17:27:28.015549 waagent[1877]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 4 17:27:28.015549 waagent[1877]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 4 17:27:28.018583 waagent[1877]: 2024-09-04T17:27:28.018528Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 4 17:27:28.018583 waagent[1877]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:27:28.018583 waagent[1877]: pkts bytes target prot opt in out source destination Sep 4 17:27:28.018583 waagent[1877]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:27:28.018583 waagent[1877]: pkts bytes target prot opt in out source destination Sep 4 17:27:28.018583 waagent[1877]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:27:28.018583 waagent[1877]: pkts bytes target prot opt in out source destination Sep 4 17:27:28.018583 waagent[1877]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 4 17:27:28.018583 waagent[1877]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 4 17:27:28.018583 waagent[1877]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 4 17:27:28.019087 waagent[1877]: 2024-09-04T17:27:28.018812Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 4 17:27:34.310675 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:27:34.320125 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:27:34.408713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:27:34.413018 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:27:34.989052 kubelet[2107]: E0904 17:27:34.988991 2107 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:27:34.993131 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:27:34.993344 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:27:45.060911 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 17:27:45.066110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:27:45.155773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
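The routing table the MonitorHandler prints above is read from /proc/net/route, where destinations, gateways and masks are little-endian 32-bit hex. A small stdlib decoder for those columns:

import socket
import struct

def decode(hex_addr: str) -> str:
    # /proc/net/route stores IPv4 addresses as little-endian 32-bit hex words.
    return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

if __name__ == "__main__":
    print(decode("0108C80A"))  # 10.200.8.1  (the default gateway above)
    print(decode("0008C80A"))  # 10.200.8.0  (the local /24)
    print(decode("10813FA8"))  # 168.63.129.16 (the WireServer route)
    print(decode("FEA9FEA9"))  # 169.254.169.254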
Sep 4 17:27:45.160003 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:27:45.702260 chronyd[1700]: Selected source PHC0 Sep 4 17:27:45.714106 kubelet[2123]: E0904 17:27:45.714052 2123 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:27:45.716495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:27:45.716693 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:27:55.811125 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 4 17:27:55.818048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:27:55.905565 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:27:55.910216 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:27:56.454479 kubelet[2142]: E0904 17:27:56.454419 2142 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:27:56.457054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:27:56.457276 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:28:01.246239 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:28:01.251128 systemd[1]: Started sshd@0-10.200.8.42:22-10.200.16.10:60394.service - OpenSSH per-connection server daemon (10.200.16.10:60394). Sep 4 17:28:01.536705 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Sep 4 17:28:01.906041 sshd[2151]: Accepted publickey for core from 10.200.16.10 port 60394 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:28:01.907720 sshd[2151]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:01.913165 systemd-logind[1666]: New session 3 of user core. Sep 4 17:28:01.920012 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:28:02.466694 systemd[1]: Started sshd@1-10.200.8.42:22-10.200.16.10:60398.service - OpenSSH per-connection server daemon (10.200.16.10:60398). Sep 4 17:28:03.107963 sshd[2156]: Accepted publickey for core from 10.200.16.10 port 60398 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:28:03.109343 sshd[2156]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:03.113185 systemd-logind[1666]: New session 4 of user core. Sep 4 17:28:03.120034 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:28:03.582602 sshd[2156]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:03.586928 systemd[1]: sshd@1-10.200.8.42:22-10.200.16.10:60398.service: Deactivated successfully. Sep 4 17:28:03.589117 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:28:03.589960 systemd-logind[1666]: Session 4 logged out. Waiting for processes to exit. 
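The kubelet keeps crash-looping above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is normally written by kubeadm init/join rather than by hand. Purely to illustrate what the missing file looks like, a sketch that writes a minimal KubeletConfiguration (field names follow the upstream v1beta1 schema; the concrete values, and writing the file manually at all, are assumptions rather than anything this host does):

from pathlib import Path

MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                      # matches the CgroupDriver seen later in this log
staticPodPath: /etc/kubernetes/manifests   # matches the static pod path seen later in this log
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
"""

def write_config(path: str = "/var/lib/kubelet/config.yaml") -> None:
    # Needs root on a real node; path comes from the error message above.
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(MINIMAL_KUBELET_CONFIG)

if __name__ == "__main__":
    write_config()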
Sep 4 17:28:03.591190 systemd-logind[1666]: Removed session 4. Sep 4 17:28:03.692978 systemd[1]: Started sshd@2-10.200.8.42:22-10.200.16.10:60410.service - OpenSSH per-connection server daemon (10.200.16.10:60410). Sep 4 17:28:04.311349 sshd[2163]: Accepted publickey for core from 10.200.16.10 port 60410 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:28:04.313034 sshd[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:04.318424 systemd-logind[1666]: New session 5 of user core. Sep 4 17:28:04.324993 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:28:04.756364 sshd[2163]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:04.759580 systemd[1]: sshd@2-10.200.8.42:22-10.200.16.10:60410.service: Deactivated successfully. Sep 4 17:28:04.761475 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:28:04.762962 systemd-logind[1666]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:28:04.763894 systemd-logind[1666]: Removed session 5. Sep 4 17:28:04.867017 systemd[1]: Started sshd@3-10.200.8.42:22-10.200.16.10:60426.service - OpenSSH per-connection server daemon (10.200.16.10:60426). Sep 4 17:28:05.495468 sshd[2170]: Accepted publickey for core from 10.200.16.10 port 60426 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:28:05.497144 sshd[2170]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:05.502414 systemd-logind[1666]: New session 6 of user core. Sep 4 17:28:05.509259 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:28:05.938960 sshd[2170]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:05.942682 systemd[1]: sshd@3-10.200.8.42:22-10.200.16.10:60426.service: Deactivated successfully. Sep 4 17:28:05.944430 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:28:05.945075 systemd-logind[1666]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:28:05.945942 systemd-logind[1666]: Removed session 6. Sep 4 17:28:06.049271 systemd[1]: Started sshd@4-10.200.8.42:22-10.200.16.10:60432.service - OpenSSH per-connection server daemon (10.200.16.10:60432). Sep 4 17:28:06.560838 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 4 17:28:06.574085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:28:06.682436 sshd[2177]: Accepted publickey for core from 10.200.16.10 port 60432 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:28:06.684633 sshd[2177]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:06.691873 systemd-logind[1666]: New session 7 of user core. Sep 4 17:28:06.704146 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:28:06.732982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:28:06.741144 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:28:06.781783 kubelet[2188]: E0904 17:28:06.781736 2188 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:28:06.784019 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:28:06.784198 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:28:07.177605 sudo[2196]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:28:07.178043 sudo[2196]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:28:07.193512 sudo[2196]: pam_unix(sudo:session): session closed for user root Sep 4 17:28:07.283789 update_engine[1667]: I0904 17:28:07.283664 1667 update_attempter.cc:509] Updating boot flags... Sep 4 17:28:07.295922 sshd[2177]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:07.302286 systemd-logind[1666]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:28:07.303234 systemd[1]: sshd@4-10.200.8.42:22-10.200.16.10:60432.service: Deactivated successfully. Sep 4 17:28:07.306383 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:28:07.312684 systemd-logind[1666]: Removed session 7. Sep 4 17:28:07.338917 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2212) Sep 4 17:28:07.420215 systemd[1]: Started sshd@5-10.200.8.42:22-10.200.16.10:60436.service - OpenSSH per-connection server daemon (10.200.16.10:60436). Sep 4 17:28:07.452868 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2205) Sep 4 17:28:07.548869 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2205) Sep 4 17:28:08.041573 sshd[2242]: Accepted publickey for core from 10.200.16.10 port 60436 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:28:08.043215 sshd[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:08.047174 systemd-logind[1666]: New session 8 of user core. Sep 4 17:28:08.056995 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:28:08.385357 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:28:08.385692 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:28:08.388821 sudo[2298]: pam_unix(sudo:session): session closed for user root Sep 4 17:28:08.393742 sudo[2297]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:28:08.394087 sudo[2297]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:28:08.412341 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:28:08.413829 auditctl[2301]: No rules Sep 4 17:28:08.414914 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:28:08.415137 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:28:08.416798 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Sep 4 17:28:08.440285 augenrules[2319]: No rules Sep 4 17:28:08.441466 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:28:08.442571 sudo[2297]: pam_unix(sudo:session): session closed for user root Sep 4 17:28:08.545010 sshd[2242]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:08.548208 systemd[1]: sshd@5-10.200.8.42:22-10.200.16.10:60436.service: Deactivated successfully. Sep 4 17:28:08.550194 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:28:08.551977 systemd-logind[1666]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:28:08.552993 systemd-logind[1666]: Removed session 8. Sep 4 17:28:08.659367 systemd[1]: Started sshd@6-10.200.8.42:22-10.200.16.10:51668.service - OpenSSH per-connection server daemon (10.200.16.10:51668). Sep 4 17:28:09.283552 sshd[2327]: Accepted publickey for core from 10.200.16.10 port 51668 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:28:09.285109 sshd[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:09.288994 systemd-logind[1666]: New session 9 of user core. Sep 4 17:28:09.296009 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:28:09.629030 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:28:09.629442 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:28:09.789142 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:28:09.790783 (dockerd)[2339]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:28:10.164998 dockerd[2339]: time="2024-09-04T17:28:10.164940027Z" level=info msg="Starting up" Sep 4 17:28:10.288182 dockerd[2339]: time="2024-09-04T17:28:10.287755460Z" level=info msg="Loading containers: start." Sep 4 17:28:10.398870 kernel: Initializing XFRM netlink socket Sep 4 17:28:10.456494 systemd-networkd[1554]: docker0: Link UP Sep 4 17:28:10.484407 dockerd[2339]: time="2024-09-04T17:28:10.484373694Z" level=info msg="Loading containers: done." Sep 4 17:28:10.588468 dockerd[2339]: time="2024-09-04T17:28:10.588423923Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:28:10.588664 dockerd[2339]: time="2024-09-04T17:28:10.588628425Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Sep 4 17:28:10.588765 dockerd[2339]: time="2024-09-04T17:28:10.588741626Z" level=info msg="Daemon has completed initialization" Sep 4 17:28:10.638449 dockerd[2339]: time="2024-09-04T17:28:10.638400865Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:28:10.638769 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:28:11.804952 containerd[1690]: time="2024-09-04T17:28:11.804912026Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\"" Sep 4 17:28:12.455869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1247664590.mount: Deactivated successfully. 
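dockerd above finishes initialization and reports "API listen on /run/docker.sock". A stdlib-only sketch that queries the Engine API's /version endpoint over that unix socket, speaking HTTP/1.0 so the reply is not chunked (needs access to the socket; "Version" and "ApiVersion" are standard fields of that response):

import json
import socket

def docker_version(sock_path: str = "/run/docker.sock") -> dict:
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    # HTTP/1.0 keeps the reply un-chunked and makes the daemon close the connection.
    s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
    data = b""
    while chunk := s.recv(4096):
        data += chunk
    s.close()
    body = data.split(b"\r\n\r\n", 1)[1]
    return json.loads(body)

if __name__ == "__main__":
    info = docker_version()
    print(info.get("Version"), info.get("ApiVersion"))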
Sep 4 17:28:14.144835 containerd[1690]: time="2024-09-04T17:28:14.144773520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:14.150072 containerd[1690]: time="2024-09-04T17:28:14.149904876Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.13: active requests=0, bytes read=34530743" Sep 4 17:28:14.155145 containerd[1690]: time="2024-09-04T17:28:14.155096532Z" level=info msg="ImageCreate event name:\"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:14.161144 containerd[1690]: time="2024-09-04T17:28:14.161095897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:14.162270 containerd[1690]: time="2024-09-04T17:28:14.162093308Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.13\" with image id \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\", size \"34527535\" in 2.357140682s" Sep 4 17:28:14.162270 containerd[1690]: time="2024-09-04T17:28:14.162132508Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\" returns image reference \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\"" Sep 4 17:28:14.182822 containerd[1690]: time="2024-09-04T17:28:14.182795733Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\"" Sep 4 17:28:16.145724 containerd[1690]: time="2024-09-04T17:28:16.145667536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:16.149586 containerd[1690]: time="2024-09-04T17:28:16.149439877Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.13: active requests=0, bytes read=31849717" Sep 4 17:28:16.154705 containerd[1690]: time="2024-09-04T17:28:16.154639733Z" level=info msg="ImageCreate event name:\"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:16.165607 containerd[1690]: time="2024-09-04T17:28:16.165429950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:16.166732 containerd[1690]: time="2024-09-04T17:28:16.166601063Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.13\" with image id \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\", size \"33399655\" in 1.98377183s" Sep 4 17:28:16.166732 containerd[1690]: time="2024-09-04T17:28:16.166638063Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\" returns image reference \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\"" Sep 4 17:28:16.187359 
containerd[1690]: time="2024-09-04T17:28:16.187333188Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\"" Sep 4 17:28:16.810914 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 4 17:28:16.818980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:28:16.939983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:28:16.953164 (kubelet)[2540]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:28:17.615145 kubelet[2540]: E0904 17:28:17.615048 2540 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:28:17.617665 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:28:17.618798 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:28:18.014515 containerd[1690]: time="2024-09-04T17:28:18.014394167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:18.018777 containerd[1690]: time="2024-09-04T17:28:18.018655604Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.13: active requests=0, bytes read=17097785" Sep 4 17:28:18.023093 containerd[1690]: time="2024-09-04T17:28:18.022832941Z" level=info msg="ImageCreate event name:\"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:18.029072 containerd[1690]: time="2024-09-04T17:28:18.028933794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:18.030491 containerd[1690]: time="2024-09-04T17:28:18.030104304Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.13\" with image id \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\", size \"18647741\" in 1.842738216s" Sep 4 17:28:18.030491 containerd[1690]: time="2024-09-04T17:28:18.030141405Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\" returns image reference \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\"" Sep 4 17:28:18.049925 containerd[1690]: time="2024-09-04T17:28:18.049897577Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\"" Sep 4 17:28:19.097280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4005309859.mount: Deactivated successfully. 
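The containerd lines around here record the control-plane image pulls (kube-apiserver, kube-controller-manager and kube-scheduler above; kube-proxy, pause, etcd and coredns below). For comparison, a hedged sketch of pulling the same images by hand through the CRI with crictl, assuming crictl is installed and containerd's default CRI socket path:

import subprocess

CRI_ENDPOINT = "unix:///run/containerd/containerd.sock"  # assumed default socket

def pull(image: str) -> None:
    subprocess.run(
        ["crictl", "--runtime-endpoint", CRI_ENDPOINT, "pull", image],
        check=True,
    )

if __name__ == "__main__":
    for image in (
        "registry.k8s.io/kube-apiserver:v1.28.13",
        "registry.k8s.io/kube-scheduler:v1.28.13",
    ):
        pull(image)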
Sep 4 17:28:19.527168 containerd[1690]: time="2024-09-04T17:28:19.527041895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:19.529269 containerd[1690]: time="2024-09-04T17:28:19.529092313Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.13: active requests=0, bytes read=28303457" Sep 4 17:28:19.533382 containerd[1690]: time="2024-09-04T17:28:19.533235449Z" level=info msg="ImageCreate event name:\"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:19.539121 containerd[1690]: time="2024-09-04T17:28:19.539070900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:19.539826 containerd[1690]: time="2024-09-04T17:28:19.539679206Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.13\" with image id \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\", repo tag \"registry.k8s.io/kube-proxy:v1.28.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\", size \"28302468\" in 1.489745627s" Sep 4 17:28:19.539826 containerd[1690]: time="2024-09-04T17:28:19.539720806Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\" returns image reference \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\"" Sep 4 17:28:19.559671 containerd[1690]: time="2024-09-04T17:28:19.559635180Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:28:20.156095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3909618436.mount: Deactivated successfully. 
Sep 4 17:28:20.182152 containerd[1690]: time="2024-09-04T17:28:20.182097323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:20.184798 containerd[1690]: time="2024-09-04T17:28:20.184729346Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Sep 4 17:28:20.190587 containerd[1690]: time="2024-09-04T17:28:20.190536097Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:20.195639 containerd[1690]: time="2024-09-04T17:28:20.195587841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:20.198811 containerd[1690]: time="2024-09-04T17:28:20.198269965Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 638.592385ms" Sep 4 17:28:20.198811 containerd[1690]: time="2024-09-04T17:28:20.198310765Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Sep 4 17:28:20.220759 containerd[1690]: time="2024-09-04T17:28:20.220730761Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 17:28:20.701402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1153565159.mount: Deactivated successfully. 
Sep 4 17:28:22.907919 containerd[1690]: time="2024-09-04T17:28:22.907789259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:22.911697 containerd[1690]: time="2024-09-04T17:28:22.911558292Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Sep 4 17:28:22.916321 containerd[1690]: time="2024-09-04T17:28:22.916273134Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:22.921434 containerd[1690]: time="2024-09-04T17:28:22.921301078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:22.922911 containerd[1690]: time="2024-09-04T17:28:22.922542988Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.701780027s" Sep 4 17:28:22.922911 containerd[1690]: time="2024-09-04T17:28:22.922580189Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Sep 4 17:28:22.942744 containerd[1690]: time="2024-09-04T17:28:22.942717065Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Sep 4 17:28:23.396359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2861976209.mount: Deactivated successfully. 
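Each "Pulled image" line reports both the image size and the elapsed time, e.g. etcd 3.5.10-0 above: 56,649,232 bytes in 2.701780027 s. A quick throughput check computed from those two logged numbers:

def throughput_mib_s(size_bytes: int, seconds: float) -> float:
    # Effective pull rate in MiB/s from the size and duration in the log line.
    return size_bytes / seconds / (1024 * 1024)

if __name__ == "__main__":
    print(f"etcd:           {throughput_mib_s(56_649_232, 2.701780027):.1f} MiB/s")
    print(f"kube-apiserver: {throughput_mib_s(34_527_535, 2.357140682):.1f} MiB/s")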
Sep 4 17:28:23.947786 containerd[1690]: time="2024-09-04T17:28:23.947674953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:23.950027 containerd[1690]: time="2024-09-04T17:28:23.949966173Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191757" Sep 4 17:28:23.954274 containerd[1690]: time="2024-09-04T17:28:23.954135910Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:23.960992 containerd[1690]: time="2024-09-04T17:28:23.960943569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:23.961828 containerd[1690]: time="2024-09-04T17:28:23.961698076Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.018916811s" Sep 4 17:28:23.961828 containerd[1690]: time="2024-09-04T17:28:23.961739176Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Sep 4 17:28:26.366357 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:28:26.372111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:28:26.399171 systemd[1]: Reloading requested from client PID 2701 ('systemctl') (unit session-9.scope)... Sep 4 17:28:26.399329 systemd[1]: Reloading... Sep 4 17:28:26.521937 zram_generator::config[2738]: No configuration found. Sep 4 17:28:26.632654 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:28:26.711610 systemd[1]: Reloading finished in 311 ms. Sep 4 17:28:26.762011 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:28:26.762106 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:28:26.762365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:28:26.765124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:28:26.974204 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:28:26.979686 (kubelet)[2809]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:28:27.636599 kubelet[2809]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:28:27.636599 kubelet[2809]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 4 17:28:27.636599 kubelet[2809]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:28:27.637132 kubelet[2809]: I0904 17:28:27.636664 2809 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:28:28.059289 kubelet[2809]: I0904 17:28:28.059186 2809 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:28:28.059289 kubelet[2809]: I0904 17:28:28.059211 2809 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:28:28.059753 kubelet[2809]: I0904 17:28:28.059476 2809 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:28:28.155932 kubelet[2809]: E0904 17:28:28.155896 2809 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.42:6443: connect: connection refused Sep 4 17:28:28.157272 kubelet[2809]: I0904 17:28:28.157120 2809 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:28:28.199099 kubelet[2809]: I0904 17:28:28.199076 2809 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 17:28:28.250189 kubelet[2809]: I0904 17:28:28.250148 2809 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:28:28.250438 kubelet[2809]: I0904 17:28:28.250405 2809 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:28:28.297759 kubelet[2809]: I0904 17:28:28.297721 2809 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:28:28.297759 kubelet[2809]: I0904 17:28:28.297759 2809 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 
17:28:28.298839 kubelet[2809]: I0904 17:28:28.298806 2809 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:28:28.300786 kubelet[2809]: I0904 17:28:28.300593 2809 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:28:28.300786 kubelet[2809]: I0904 17:28:28.300620 2809 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:28:28.300786 kubelet[2809]: I0904 17:28:28.300651 2809 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:28:28.300786 kubelet[2809]: I0904 17:28:28.300668 2809 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:28:28.305329 kubelet[2809]: W0904 17:28:28.302551 2809 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Sep 4 17:28:28.305329 kubelet[2809]: E0904 17:28:28.302604 2809 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Sep 4 17:28:28.305329 kubelet[2809]: W0904 17:28:28.302673 2809 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-1f7e34d344&limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Sep 4 17:28:28.305329 kubelet[2809]: E0904 17:28:28.302726 2809 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-1f7e34d344&limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Sep 4 17:28:28.305984 kubelet[2809]: I0904 17:28:28.305694 2809 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:28:28.311580 kubelet[2809]: W0904 17:28:28.309720 2809 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 17:28:28.311580 kubelet[2809]: I0904 17:28:28.311369 2809 server.go:1232] "Started kubelet" Sep 4 17:28:28.317869 kubelet[2809]: E0904 17:28:28.317019 2809 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:28:28.317869 kubelet[2809]: E0904 17:28:28.317048 2809 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:28:28.317869 kubelet[2809]: E0904 17:28:28.317242 2809 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3975.2.1-a-1f7e34d344.17f21aa6302b1976", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3975.2.1-a-1f7e34d344", UID:"ci-3975.2.1-a-1f7e34d344", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3975.2.1-a-1f7e34d344"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 28, 28, 311345526, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 28, 28, 311345526, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3975.2.1-a-1f7e34d344"}': 'Post "https://10.200.8.42:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.42:6443: connect: connection refused'(may retry after sleeping) Sep 4 17:28:28.317869 kubelet[2809]: I0904 17:28:28.317395 2809 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:28:28.318104 kubelet[2809]: I0904 17:28:28.317554 2809 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:28:28.318104 kubelet[2809]: I0904 17:28:28.317665 2809 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:28:28.318104 kubelet[2809]: I0904 17:28:28.317725 2809 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:28:28.319050 kubelet[2809]: I0904 17:28:28.319034 2809 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:28:28.323682 kubelet[2809]: E0904 17:28:28.323666 2809 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-1f7e34d344\" not found" Sep 4 17:28:28.323808 kubelet[2809]: I0904 17:28:28.323799 2809 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:28:28.323974 kubelet[2809]: I0904 17:28:28.323961 2809 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:28:28.324106 kubelet[2809]: I0904 17:28:28.324096 2809 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:28:28.324511 kubelet[2809]: W0904 17:28:28.324472 2809 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Sep 4 17:28:28.324615 kubelet[2809]: E0904 17:28:28.324605 2809 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Sep 4 17:28:28.325237 
kubelet[2809]: E0904 17:28:28.325219 2809 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-1f7e34d344?timeout=10s\": dial tcp 10.200.8.42:6443: connect: connection refused" interval="200ms" Sep 4 17:28:28.369642 kubelet[2809]: I0904 17:28:28.369614 2809 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:28:28.369642 kubelet[2809]: I0904 17:28:28.369642 2809 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:28:28.369810 kubelet[2809]: I0904 17:28:28.369664 2809 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:28:28.425938 kubelet[2809]: I0904 17:28:28.425909 2809 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:28.426312 kubelet[2809]: E0904 17:28:28.426277 2809 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.42:6443/api/v1/nodes\": dial tcp 10.200.8.42:6443: connect: connection refused" node="ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:28.444129 kubelet[2809]: I0904 17:28:28.444075 2809 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:28:28.446413 kubelet[2809]: I0904 17:28:28.446333 2809 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:28:28.446413 kubelet[2809]: I0904 17:28:28.446381 2809 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:28:28.446413 kubelet[2809]: I0904 17:28:28.446405 2809 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:28:28.446565 kubelet[2809]: E0904 17:28:28.446469 2809 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:28:28.448099 kubelet[2809]: W0904 17:28:28.447484 2809 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Sep 4 17:28:28.448099 kubelet[2809]: E0904 17:28:28.447519 2809 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Sep 4 17:28:28.471552 kubelet[2809]: E0904 17:28:28.471463 2809 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3975.2.1-a-1f7e34d344.17f21aa6302b1976", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3975.2.1-a-1f7e34d344", UID:"ci-3975.2.1-a-1f7e34d344", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3975.2.1-a-1f7e34d344"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 28, 28, 311345526, time.Local), 
LastTimestamp:time.Date(2024, time.September, 4, 17, 28, 28, 311345526, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3975.2.1-a-1f7e34d344"}': 'Post "https://10.200.8.42:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.42:6443: connect: connection refused'(may retry after sleeping) Sep 4 17:28:28.526682 kubelet[2809]: E0904 17:28:28.526653 2809 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-1f7e34d344?timeout=10s\": dial tcp 10.200.8.42:6443: connect: connection refused" interval="400ms" Sep 4 17:28:28.747913 kubelet[2809]: E0904 17:28:28.546727 2809 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:28:28.747913 kubelet[2809]: I0904 17:28:28.629817 2809 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:28.747913 kubelet[2809]: E0904 17:28:28.630190 2809 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.42:6443/api/v1/nodes\": dial tcp 10.200.8.42:6443: connect: connection refused" node="ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:28.749976 kubelet[2809]: E0904 17:28:28.749918 2809 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:28:28.789664 kubelet[2809]: I0904 17:28:28.789626 2809 policy_none.go:49] "None policy: Start" Sep 4 17:28:28.790671 kubelet[2809]: I0904 17:28:28.790616 2809 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:28:28.790671 kubelet[2809]: I0904 17:28:28.790653 2809 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:28:28.927439 kubelet[2809]: E0904 17:28:28.927408 2809 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-1f7e34d344?timeout=10s\": dial tcp 10.200.8.42:6443: connect: connection refused" interval="800ms" Sep 4 17:28:29.033214 kubelet[2809]: I0904 17:28:29.033120 2809 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:29.033465 kubelet[2809]: E0904 17:28:29.033432 2809 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.42:6443/api/v1/nodes\": dial tcp 10.200.8.42:6443: connect: connection refused" node="ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:29.150523 kubelet[2809]: E0904 17:28:29.150428 2809 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:28:29.301214 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 17:28:29.310684 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:28:29.314656 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
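The "Failed to ensure lease exists, will retry" entries back off by doubling: interval="200ms", "400ms", "800ms" above, then "1.6s" just below. A toy reproduction of that doubling schedule (not the kubelet's actual code; the cap value is an assumption):

def backoff(initial_s: float = 0.2, factor: float = 2.0, cap_s: float = 7.0):
    # Yields retry delays that double each time, up to an assumed cap.
    delay = initial_s
    while True:
        yield min(delay, cap_s)
        delay *= factor

if __name__ == "__main__":
    gen = backoff()
    print([next(gen) for _ in range(4)])  # [0.2, 0.4, 0.8, 1.6]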
Sep 4 17:28:29.322436 kubelet[2809]: I0904 17:28:29.321812 2809 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:28:29.322436 kubelet[2809]: I0904 17:28:29.322136 2809 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:28:29.323356 kubelet[2809]: E0904 17:28:29.323171 2809 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.2.1-a-1f7e34d344\" not found" Sep 4 17:28:29.532443 kubelet[2809]: W0904 17:28:29.532373 2809 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Sep 4 17:28:29.532443 kubelet[2809]: E0904 17:28:29.532420 2809 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Sep 4 17:28:29.537251 kubelet[2809]: W0904 17:28:29.537185 2809 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Sep 4 17:28:29.537337 kubelet[2809]: E0904 17:28:29.537261 2809 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Sep 4 17:28:29.728257 kubelet[2809]: E0904 17:28:29.728174 2809 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-1f7e34d344?timeout=10s\": dial tcp 10.200.8.42:6443: connect: connection refused" interval="1.6s" Sep 4 17:28:29.738043 kubelet[2809]: W0904 17:28:29.737997 2809 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-1f7e34d344&limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Sep 4 17:28:29.738142 kubelet[2809]: E0904 17:28:29.738050 2809 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-1f7e34d344&limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Sep 4 17:28:29.836314 kubelet[2809]: I0904 17:28:29.836270 2809 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:29.836807 kubelet[2809]: E0904 17:28:29.836650 2809 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.42:6443/api/v1/nodes\": dial tcp 10.200.8.42:6443: connect: connection refused" node="ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:29.868347 kubelet[2809]: W0904 17:28:29.868257 2809 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
10.200.8.42:6443: connect: connection refused Sep 4 17:28:29.868347 kubelet[2809]: E0904 17:28:29.868351 2809 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Sep 4 17:28:29.951417 kubelet[2809]: I0904 17:28:29.951331 2809 topology_manager.go:215] "Topology Admit Handler" podUID="c680416be14766d52137073cd99c48d8" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:29.953477 kubelet[2809]: I0904 17:28:29.953454 2809 topology_manager.go:215] "Topology Admit Handler" podUID="1c873a763f5cfeb568d03b5d3ffc37cd" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:29.955069 kubelet[2809]: I0904 17:28:29.954834 2809 topology_manager.go:215] "Topology Admit Handler" podUID="468efe39c038367e55086d192b863d84" podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:29.964630 systemd[1]: Created slice kubepods-burstable-podc680416be14766d52137073cd99c48d8.slice - libcontainer container kubepods-burstable-podc680416be14766d52137073cd99c48d8.slice. Sep 4 17:28:29.983808 systemd[1]: Created slice kubepods-burstable-pod1c873a763f5cfeb568d03b5d3ffc37cd.slice - libcontainer container kubepods-burstable-pod1c873a763f5cfeb568d03b5d3ffc37cd.slice. Sep 4 17:28:29.988354 systemd[1]: Created slice kubepods-burstable-pod468efe39c038367e55086d192b863d84.slice - libcontainer container kubepods-burstable-pod468efe39c038367e55086d192b863d84.slice. Sep 4 17:28:30.033672 kubelet[2809]: I0904 17:28:30.033642 2809 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c680416be14766d52137073cd99c48d8-ca-certs\") pod \"kube-apiserver-ci-3975.2.1-a-1f7e34d344\" (UID: \"c680416be14766d52137073cd99c48d8\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:30.033672 kubelet[2809]: I0904 17:28:30.033697 2809 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c680416be14766d52137073cd99c48d8-k8s-certs\") pod \"kube-apiserver-ci-3975.2.1-a-1f7e34d344\" (UID: \"c680416be14766d52137073cd99c48d8\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:30.034056 kubelet[2809]: I0904 17:28:30.033734 2809 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/468efe39c038367e55086d192b863d84-kubeconfig\") pod \"kube-scheduler-ci-3975.2.1-a-1f7e34d344\" (UID: \"468efe39c038367e55086d192b863d84\") " pod="kube-system/kube-scheduler-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:30.034056 kubelet[2809]: I0904 17:28:30.033768 2809 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c873a763f5cfeb568d03b5d3ffc37cd-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.1-a-1f7e34d344\" (UID: \"1c873a763f5cfeb568d03b5d3ffc37cd\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:30.034056 kubelet[2809]: I0904 17:28:30.033803 2809 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/1c873a763f5cfeb568d03b5d3ffc37cd-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.1-a-1f7e34d344\" (UID: \"1c873a763f5cfeb568d03b5d3ffc37cd\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:30.034056 kubelet[2809]: I0904 17:28:30.033842 2809 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c873a763f5cfeb568d03b5d3ffc37cd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.1-a-1f7e34d344\" (UID: \"1c873a763f5cfeb568d03b5d3ffc37cd\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:30.034056 kubelet[2809]: I0904 17:28:30.033905 2809 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c680416be14766d52137073cd99c48d8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.1-a-1f7e34d344\" (UID: \"c680416be14766d52137073cd99c48d8\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:30.034242 kubelet[2809]: I0904 17:28:30.033940 2809 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c873a763f5cfeb568d03b5d3ffc37cd-ca-certs\") pod \"kube-controller-manager-ci-3975.2.1-a-1f7e34d344\" (UID: \"1c873a763f5cfeb568d03b5d3ffc37cd\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:30.034242 kubelet[2809]: I0904 17:28:30.033975 2809 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1c873a763f5cfeb568d03b5d3ffc37cd-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.1-a-1f7e34d344\" (UID: \"1c873a763f5cfeb568d03b5d3ffc37cd\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:30.282504 containerd[1690]: time="2024-09-04T17:28:30.282361484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.1-a-1f7e34d344,Uid:c680416be14766d52137073cd99c48d8,Namespace:kube-system,Attempt:0,}" Sep 4 17:28:30.288551 containerd[1690]: time="2024-09-04T17:28:30.288236773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.1-a-1f7e34d344,Uid:1c873a763f5cfeb568d03b5d3ffc37cd,Namespace:kube-system,Attempt:0,}" Sep 4 17:28:30.291689 containerd[1690]: time="2024-09-04T17:28:30.291651125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.1-a-1f7e34d344,Uid:468efe39c038367e55086d192b863d84,Namespace:kube-system,Attempt:0,}" Sep 4 17:28:30.335361 kubelet[2809]: E0904 17:28:30.335320 2809 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.42:6443: connect: connection refused Sep 4 17:28:30.895594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1617168053.mount: Deactivated successfully. 
Sep 4 17:28:30.927504 containerd[1690]: time="2024-09-04T17:28:30.927324654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:28:30.930820 containerd[1690]: time="2024-09-04T17:28:30.930770307Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:28:30.934344 containerd[1690]: time="2024-09-04T17:28:30.934294860Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Sep 4 17:28:30.938344 containerd[1690]: time="2024-09-04T17:28:30.938294121Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:28:30.940522 containerd[1690]: time="2024-09-04T17:28:30.940471454Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:28:30.945478 containerd[1690]: time="2024-09-04T17:28:30.945443529Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:28:30.948356 containerd[1690]: time="2024-09-04T17:28:30.948169870Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:28:30.954032 containerd[1690]: time="2024-09-04T17:28:30.953840456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:28:30.955601 containerd[1690]: time="2024-09-04T17:28:30.955066775Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 672.583188ms" Sep 4 17:28:30.956593 containerd[1690]: time="2024-09-04T17:28:30.956559097Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 664.831971ms" Sep 4 17:28:30.960300 containerd[1690]: time="2024-09-04T17:28:30.960266753Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 671.925878ms" Sep 4 17:28:31.263998 containerd[1690]: time="2024-09-04T17:28:31.262020925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:28:31.264987 containerd[1690]: time="2024-09-04T17:28:31.264049755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:28:31.264987 containerd[1690]: time="2024-09-04T17:28:31.264105756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:31.264987 containerd[1690]: time="2024-09-04T17:28:31.264128956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:28:31.264987 containerd[1690]: time="2024-09-04T17:28:31.264163157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:31.264987 containerd[1690]: time="2024-09-04T17:28:31.264915068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:31.264987 containerd[1690]: time="2024-09-04T17:28:31.264947169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:28:31.264987 containerd[1690]: time="2024-09-04T17:28:31.264962569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:31.269485 containerd[1690]: time="2024-09-04T17:28:31.267790512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:28:31.269485 containerd[1690]: time="2024-09-04T17:28:31.268870128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:31.269485 containerd[1690]: time="2024-09-04T17:28:31.268895829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:28:31.269485 containerd[1690]: time="2024-09-04T17:28:31.269085632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:31.308207 systemd[1]: Started cri-containerd-35a952e08ad36f7ba0b6d5582d584dabbe51732fb04ed390f0f04840336b00cf.scope - libcontainer container 35a952e08ad36f7ba0b6d5582d584dabbe51732fb04ed390f0f04840336b00cf. Sep 4 17:28:31.309914 systemd[1]: Started cri-containerd-4dd74f0484b54a508f3e99051f6ed6adfec9bb98925f113869c48e9ee8113cf2.scope - libcontainer container 4dd74f0484b54a508f3e99051f6ed6adfec9bb98925f113869c48e9ee8113cf2. Sep 4 17:28:31.314242 systemd[1]: Started cri-containerd-a993bdbb2a4b4c0912912f4c74b18d74dd09d8c8847d45c5d44325e0938d9d1a.scope - libcontainer container a993bdbb2a4b4c0912912f4c74b18d74dd09d8c8847d45c5d44325e0938d9d1a. 
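Editor's note (not part of the log): each RunPodSandbox above is backed by a transient systemd unit named cri-containerd-<sandbox id>.scope, and the resulting containers live in containerd's CRI namespace. A hedged sketch that lists them with containerd's Go client; the socket path /run/containerd/containerd.sock and the "k8s.io" namespace are common defaults assumed here, not taken from the log:

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers are kept in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID()) // e.g. 35a952e08ad36f7ba0b6d5582d584dab...
	}
}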
Sep 4 17:28:31.329657 kubelet[2809]: E0904 17:28:31.329632 2809 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-1f7e34d344?timeout=10s\": dial tcp 10.200.8.42:6443: connect: connection refused" interval="3.2s" Sep 4 17:28:31.381010 containerd[1690]: time="2024-09-04T17:28:31.380863725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.1-a-1f7e34d344,Uid:1c873a763f5cfeb568d03b5d3ffc37cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"35a952e08ad36f7ba0b6d5582d584dabbe51732fb04ed390f0f04840336b00cf\"" Sep 4 17:28:31.388681 containerd[1690]: time="2024-09-04T17:28:31.388651943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.1-a-1f7e34d344,Uid:468efe39c038367e55086d192b863d84,Namespace:kube-system,Attempt:0,} returns sandbox id \"a993bdbb2a4b4c0912912f4c74b18d74dd09d8c8847d45c5d44325e0938d9d1a\"" Sep 4 17:28:31.392884 containerd[1690]: time="2024-09-04T17:28:31.391830691Z" level=info msg="CreateContainer within sandbox \"35a952e08ad36f7ba0b6d5582d584dabbe51732fb04ed390f0f04840336b00cf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:28:31.394428 containerd[1690]: time="2024-09-04T17:28:31.394398830Z" level=info msg="CreateContainer within sandbox \"a993bdbb2a4b4c0912912f4c74b18d74dd09d8c8847d45c5d44325e0938d9d1a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:28:31.401405 containerd[1690]: time="2024-09-04T17:28:31.401368335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.1-a-1f7e34d344,Uid:c680416be14766d52137073cd99c48d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dd74f0484b54a508f3e99051f6ed6adfec9bb98925f113869c48e9ee8113cf2\"" Sep 4 17:28:31.406335 containerd[1690]: time="2024-09-04T17:28:31.406303310Z" level=info msg="CreateContainer within sandbox \"4dd74f0484b54a508f3e99051f6ed6adfec9bb98925f113869c48e9ee8113cf2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:28:31.439282 kubelet[2809]: I0904 17:28:31.439059 2809 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:31.439439 kubelet[2809]: E0904 17:28:31.439428 2809 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.42:6443/api/v1/nodes\": dial tcp 10.200.8.42:6443: connect: connection refused" node="ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:31.459136 containerd[1690]: time="2024-09-04T17:28:31.459049709Z" level=info msg="CreateContainer within sandbox \"35a952e08ad36f7ba0b6d5582d584dabbe51732fb04ed390f0f04840336b00cf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"795be017e4c4d836db92dbe26d827cb37c387dc3e511fa652261db9ee1577f0b\"" Sep 4 17:28:31.459996 containerd[1690]: time="2024-09-04T17:28:31.459970223Z" level=info msg="StartContainer for \"795be017e4c4d836db92dbe26d827cb37c387dc3e511fa652261db9ee1577f0b\"" Sep 4 17:28:31.473326 containerd[1690]: time="2024-09-04T17:28:31.473230024Z" level=info msg="CreateContainer within sandbox \"4dd74f0484b54a508f3e99051f6ed6adfec9bb98925f113869c48e9ee8113cf2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"832df0f3d7d52fd6282ef6bdc33260f81ea264e1cf7e8a434dd34a57bc5493be\"" Sep 4 17:28:31.474897 containerd[1690]: time="2024-09-04T17:28:31.473678931Z" level=info msg="StartContainer 
for \"832df0f3d7d52fd6282ef6bdc33260f81ea264e1cf7e8a434dd34a57bc5493be\"" Sep 4 17:28:31.478350 containerd[1690]: time="2024-09-04T17:28:31.478318301Z" level=info msg="CreateContainer within sandbox \"a993bdbb2a4b4c0912912f4c74b18d74dd09d8c8847d45c5d44325e0938d9d1a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c8bb125c54497648c119846156a8c707a87d69bdc98bbad97e4c6bddac31bdfd\"" Sep 4 17:28:31.479945 containerd[1690]: time="2024-09-04T17:28:31.479912225Z" level=info msg="StartContainer for \"c8bb125c54497648c119846156a8c707a87d69bdc98bbad97e4c6bddac31bdfd\"" Sep 4 17:28:31.490554 systemd[1]: Started cri-containerd-795be017e4c4d836db92dbe26d827cb37c387dc3e511fa652261db9ee1577f0b.scope - libcontainer container 795be017e4c4d836db92dbe26d827cb37c387dc3e511fa652261db9ee1577f0b. Sep 4 17:28:31.521285 systemd[1]: Started cri-containerd-832df0f3d7d52fd6282ef6bdc33260f81ea264e1cf7e8a434dd34a57bc5493be.scope - libcontainer container 832df0f3d7d52fd6282ef6bdc33260f81ea264e1cf7e8a434dd34a57bc5493be. Sep 4 17:28:31.535022 systemd[1]: Started cri-containerd-c8bb125c54497648c119846156a8c707a87d69bdc98bbad97e4c6bddac31bdfd.scope - libcontainer container c8bb125c54497648c119846156a8c707a87d69bdc98bbad97e4c6bddac31bdfd. Sep 4 17:28:31.571318 containerd[1690]: time="2024-09-04T17:28:31.570617399Z" level=info msg="StartContainer for \"795be017e4c4d836db92dbe26d827cb37c387dc3e511fa652261db9ee1577f0b\" returns successfully" Sep 4 17:28:31.604943 containerd[1690]: time="2024-09-04T17:28:31.604906619Z" level=info msg="StartContainer for \"832df0f3d7d52fd6282ef6bdc33260f81ea264e1cf7e8a434dd34a57bc5493be\" returns successfully" Sep 4 17:28:31.634831 containerd[1690]: time="2024-09-04T17:28:31.634327164Z" level=info msg="StartContainer for \"c8bb125c54497648c119846156a8c707a87d69bdc98bbad97e4c6bddac31bdfd\" returns successfully" Sep 4 17:28:31.713880 kubelet[2809]: W0904 17:28:31.712650 2809 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Sep 4 17:28:31.713880 kubelet[2809]: E0904 17:28:31.712759 2809 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.42:6443: connect: connection refused Sep 4 17:28:34.305657 kubelet[2809]: I0904 17:28:34.305607 2809 apiserver.go:52] "Watching apiserver" Sep 4 17:28:34.325227 kubelet[2809]: I0904 17:28:34.325165 2809 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:28:34.378211 kubelet[2809]: E0904 17:28:34.378179 2809 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3975.2.1-a-1f7e34d344" not found Sep 4 17:28:34.532653 kubelet[2809]: E0904 17:28:34.532612 2809 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975.2.1-a-1f7e34d344\" not found" node="ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:34.641797 kubelet[2809]: I0904 17:28:34.641760 2809 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:35.452888 kubelet[2809]: I0904 17:28:35.452613 2809 kubelet_node_status.go:73] "Successfully registered node" node="ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:35.470601 kubelet[2809]: 
W0904 17:28:35.469948 2809 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:28:37.065515 systemd[1]: Reloading requested from client PID 3078 ('systemctl') (unit session-9.scope)... Sep 4 17:28:37.065531 systemd[1]: Reloading... Sep 4 17:28:37.159917 zram_generator::config[3115]: No configuration found. Sep 4 17:28:37.270811 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:28:37.363176 systemd[1]: Reloading finished in 297 ms. Sep 4 17:28:37.400017 kubelet[2809]: I0904 17:28:37.399961 2809 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:28:37.400116 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:28:37.415797 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:28:37.416063 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:28:37.422159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:28:37.515190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:28:37.521597 (kubelet)[3182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:28:37.561643 kubelet[3182]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:28:37.561643 kubelet[3182]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:28:37.561997 kubelet[3182]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:28:37.561997 kubelet[3182]: I0904 17:28:37.561743 3182 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:28:37.567966 kubelet[3182]: I0904 17:28:37.567942 3182 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:28:37.567966 kubelet[3182]: I0904 17:28:37.567964 3182 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:28:37.568194 kubelet[3182]: I0904 17:28:37.568173 3182 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:28:37.569506 kubelet[3182]: I0904 17:28:37.569483 3182 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:28:37.570843 kubelet[3182]: I0904 17:28:37.570404 3182 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:28:37.577085 kubelet[3182]: I0904 17:28:37.577071 3182 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:28:37.577284 kubelet[3182]: I0904 17:28:37.577266 3182 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:28:37.577426 kubelet[3182]: I0904 17:28:37.577409 3182 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:28:37.577548 kubelet[3182]: I0904 17:28:37.577432 3182 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:28:37.577548 kubelet[3182]: I0904 17:28:37.577445 3182 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:28:37.577548 kubelet[3182]: I0904 17:28:37.577485 3182 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:28:37.577662 kubelet[3182]: I0904 17:28:37.577577 3182 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:28:37.577662 kubelet[3182]: I0904 17:28:37.577593 3182 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:28:37.577662 kubelet[3182]: I0904 17:28:37.577618 3182 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:28:37.577662 kubelet[3182]: I0904 17:28:37.577634 3182 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:28:37.581233 kubelet[3182]: I0904 17:28:37.581122 3182 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:28:37.581640 kubelet[3182]: I0904 17:28:37.581618 3182 server.go:1232] "Started kubelet" Sep 4 17:28:37.584780 kubelet[3182]: I0904 17:28:37.583258 3182 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:28:37.584780 kubelet[3182]: I0904 17:28:37.583503 3182 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:28:37.584780 kubelet[3182]: I0904 17:28:37.583546 3182 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:28:37.584780 kubelet[3182]: I0904 17:28:37.584379 3182 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:28:37.588662 kubelet[3182]: I0904 17:28:37.588646 3182 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Sep 4 17:28:37.595797 kubelet[3182]: I0904 17:28:37.595779 3182 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:28:37.596110 kubelet[3182]: E0904 17:28:37.596092 3182 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:28:37.596249 kubelet[3182]: E0904 17:28:37.596238 3182 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:28:37.596309 kubelet[3182]: I0904 17:28:37.596288 3182 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:28:37.596430 kubelet[3182]: I0904 17:28:37.596415 3182 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:28:37.611103 kubelet[3182]: I0904 17:28:37.611080 3182 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:28:37.614927 kubelet[3182]: I0904 17:28:37.612471 3182 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:28:37.614927 kubelet[3182]: I0904 17:28:37.612493 3182 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:28:37.614927 kubelet[3182]: I0904 17:28:37.612510 3182 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:28:37.614927 kubelet[3182]: E0904 17:28:37.612550 3182 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:28:37.683658 kubelet[3182]: I0904 17:28:37.683630 3182 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:28:37.683658 kubelet[3182]: I0904 17:28:37.683657 3182 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:28:37.683827 kubelet[3182]: I0904 17:28:37.683677 3182 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:28:37.683827 kubelet[3182]: I0904 17:28:37.683817 3182 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:28:37.683972 kubelet[3182]: I0904 17:28:37.683840 3182 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:28:37.683972 kubelet[3182]: I0904 17:28:37.683865 3182 policy_none.go:49] "None policy: Start" Sep 4 17:28:37.684511 kubelet[3182]: I0904 17:28:37.684487 3182 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:28:37.684598 kubelet[3182]: I0904 17:28:37.684520 3182 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:28:37.684783 kubelet[3182]: I0904 17:28:37.684762 3182 state_mem.go:75] "Updated machine memory state" Sep 4 17:28:37.688628 kubelet[3182]: I0904 17:28:37.688600 3182 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:28:37.689040 kubelet[3182]: I0904 17:28:37.688823 3182 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:28:37.699834 kubelet[3182]: I0904 17:28:37.699820 3182 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:37.709199 kubelet[3182]: I0904 17:28:37.709168 3182 kubelet_node_status.go:108] "Node was previously registered" node="ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:37.709467 kubelet[3182]: I0904 17:28:37.709436 3182 kubelet_node_status.go:73] "Successfully registered node" node="ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:37.712793 kubelet[3182]: I0904 17:28:37.712750 3182 
topology_manager.go:215] "Topology Admit Handler" podUID="c680416be14766d52137073cd99c48d8" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:37.713155 kubelet[3182]: I0904 17:28:37.713031 3182 topology_manager.go:215] "Topology Admit Handler" podUID="1c873a763f5cfeb568d03b5d3ffc37cd" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:37.713155 kubelet[3182]: I0904 17:28:37.713100 3182 topology_manager.go:215] "Topology Admit Handler" podUID="468efe39c038367e55086d192b863d84" podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:37.720658 kubelet[3182]: W0904 17:28:37.720643 3182 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:28:37.722965 kubelet[3182]: W0904 17:28:37.722891 3182 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:28:37.723048 kubelet[3182]: W0904 17:28:37.722986 3182 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:28:37.723195 kubelet[3182]: E0904 17:28:37.723167 3182 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3975.2.1-a-1f7e34d344\" already exists" pod="kube-system/kube-scheduler-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:37.897746 kubelet[3182]: I0904 17:28:37.897647 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c680416be14766d52137073cd99c48d8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.1-a-1f7e34d344\" (UID: \"c680416be14766d52137073cd99c48d8\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:37.897746 kubelet[3182]: I0904 17:28:37.897707 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c873a763f5cfeb568d03b5d3ffc37cd-ca-certs\") pod \"kube-controller-manager-ci-3975.2.1-a-1f7e34d344\" (UID: \"1c873a763f5cfeb568d03b5d3ffc37cd\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:37.898011 kubelet[3182]: I0904 17:28:37.897942 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c873a763f5cfeb568d03b5d3ffc37cd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.1-a-1f7e34d344\" (UID: \"1c873a763f5cfeb568d03b5d3ffc37cd\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:37.898011 kubelet[3182]: I0904 17:28:37.898011 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c680416be14766d52137073cd99c48d8-ca-certs\") pod \"kube-apiserver-ci-3975.2.1-a-1f7e34d344\" (UID: \"c680416be14766d52137073cd99c48d8\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:37.898134 kubelet[3182]: I0904 17:28:37.898047 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c680416be14766d52137073cd99c48d8-k8s-certs\") 
pod \"kube-apiserver-ci-3975.2.1-a-1f7e34d344\" (UID: \"c680416be14766d52137073cd99c48d8\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:37.898134 kubelet[3182]: I0904 17:28:37.898128 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c873a763f5cfeb568d03b5d3ffc37cd-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.1-a-1f7e34d344\" (UID: \"1c873a763f5cfeb568d03b5d3ffc37cd\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:37.898233 kubelet[3182]: I0904 17:28:37.898168 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/468efe39c038367e55086d192b863d84-kubeconfig\") pod \"kube-scheduler-ci-3975.2.1-a-1f7e34d344\" (UID: \"468efe39c038367e55086d192b863d84\") " pod="kube-system/kube-scheduler-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:37.898233 kubelet[3182]: I0904 17:28:37.898218 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1c873a763f5cfeb568d03b5d3ffc37cd-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.1-a-1f7e34d344\" (UID: \"1c873a763f5cfeb568d03b5d3ffc37cd\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:37.898331 kubelet[3182]: I0904 17:28:37.898252 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c873a763f5cfeb568d03b5d3ffc37cd-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.1-a-1f7e34d344\" (UID: \"1c873a763f5cfeb568d03b5d3ffc37cd\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:38.579449 kubelet[3182]: I0904 17:28:38.579401 3182 apiserver.go:52] "Watching apiserver" Sep 4 17:28:38.597338 kubelet[3182]: I0904 17:28:38.597281 3182 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:28:38.679880 kubelet[3182]: W0904 17:28:38.678773 3182 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:28:38.679880 kubelet[3182]: E0904 17:28:38.678861 3182 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975.2.1-a-1f7e34d344\" already exists" pod="kube-system/kube-apiserver-ci-3975.2.1-a-1f7e34d344" Sep 4 17:28:38.683659 kubelet[3182]: I0904 17:28:38.683633 3182 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975.2.1-a-1f7e34d344" podStartSLOduration=1.683562534 podCreationTimestamp="2024-09-04 17:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:28:38.675180354 +0000 UTC m=+1.149423754" watchObservedRunningTime="2024-09-04 17:28:38.683562534 +0000 UTC m=+1.157805834" Sep 4 17:28:38.691943 kubelet[3182]: I0904 17:28:38.691924 3182 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975.2.1-a-1f7e34d344" podStartSLOduration=3.691892014 podCreationTimestamp="2024-09-04 17:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 
17:28:38.684282941 +0000 UTC m=+1.158526341" watchObservedRunningTime="2024-09-04 17:28:38.691892014 +0000 UTC m=+1.166135414" Sep 4 17:28:38.721280 kubelet[3182]: I0904 17:28:38.721254 3182 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975.2.1-a-1f7e34d344" podStartSLOduration=1.721189896 podCreationTimestamp="2024-09-04 17:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:28:38.692905424 +0000 UTC m=+1.167148824" watchObservedRunningTime="2024-09-04 17:28:38.721189896 +0000 UTC m=+1.195433196" Sep 4 17:28:43.678045 sudo[2330]: pam_unix(sudo:session): session closed for user root Sep 4 17:28:43.893056 sshd[2327]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:43.897070 systemd[1]: sshd@6-10.200.8.42:22-10.200.16.10:51668.service: Deactivated successfully. Sep 4 17:28:43.899190 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:28:43.899439 systemd[1]: session-9.scope: Consumed 3.990s CPU time, 136.8M memory peak, 0B memory swap peak. Sep 4 17:28:43.899979 systemd-logind[1666]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:28:43.901088 systemd-logind[1666]: Removed session 9. Sep 4 17:28:50.204717 kubelet[3182]: I0904 17:28:50.204629 3182 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:28:50.205285 containerd[1690]: time="2024-09-04T17:28:50.205060083Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:28:50.205658 kubelet[3182]: I0904 17:28:50.205439 3182 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:28:50.409685 kubelet[3182]: I0904 17:28:50.409636 3182 topology_manager.go:215] "Topology Admit Handler" podUID="6de11f49-cd49-4f7f-b140-473fdc14d511" podNamespace="kube-system" podName="kube-proxy-8dvgl" Sep 4 17:28:50.423374 systemd[1]: Created slice kubepods-besteffort-pod6de11f49_cd49_4f7f_b140_473fdc14d511.slice - libcontainer container kubepods-besteffort-pod6de11f49_cd49_4f7f_b140_473fdc14d511.slice. 
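Editor's note (not part of the log): the "Updating runtime config through cri with podcidr" line above is the kubelet handing the node's pod CIDR (192.168.0.0/24) to the container runtime. A small illustrative Go check of that value, showing the address range a /24 gives this node:

package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	const podCIDR = "192.168.0.0/24" // value reported in the kubelet_network line above
	firstIP, ipnet, err := net.ParseCIDR(podCIDR)
	if err != nil {
		log.Fatalf("invalid pod CIDR %q: %v", podCIDR, err)
	}
	ones, bits := ipnet.Mask.Size()
	fmt.Printf("node pod network %s (first address %s, %d addresses)\n",
		ipnet, firstIP, 1<<uint(bits-ones))
}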
Sep 4 17:28:50.477395 kubelet[3182]: I0904 17:28:50.477059 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6de11f49-cd49-4f7f-b140-473fdc14d511-kube-proxy\") pod \"kube-proxy-8dvgl\" (UID: \"6de11f49-cd49-4f7f-b140-473fdc14d511\") " pod="kube-system/kube-proxy-8dvgl" Sep 4 17:28:50.477395 kubelet[3182]: I0904 17:28:50.477179 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc4zh\" (UniqueName: \"kubernetes.io/projected/6de11f49-cd49-4f7f-b140-473fdc14d511-kube-api-access-fc4zh\") pod \"kube-proxy-8dvgl\" (UID: \"6de11f49-cd49-4f7f-b140-473fdc14d511\") " pod="kube-system/kube-proxy-8dvgl" Sep 4 17:28:50.477395 kubelet[3182]: I0904 17:28:50.477245 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6de11f49-cd49-4f7f-b140-473fdc14d511-lib-modules\") pod \"kube-proxy-8dvgl\" (UID: \"6de11f49-cd49-4f7f-b140-473fdc14d511\") " pod="kube-system/kube-proxy-8dvgl" Sep 4 17:28:50.477395 kubelet[3182]: I0904 17:28:50.477277 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6de11f49-cd49-4f7f-b140-473fdc14d511-xtables-lock\") pod \"kube-proxy-8dvgl\" (UID: \"6de11f49-cd49-4f7f-b140-473fdc14d511\") " pod="kube-system/kube-proxy-8dvgl" Sep 4 17:28:50.584174 kubelet[3182]: E0904 17:28:50.584141 3182 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 4 17:28:50.584174 kubelet[3182]: E0904 17:28:50.584177 3182 projected.go:198] Error preparing data for projected volume kube-api-access-fc4zh for pod kube-system/kube-proxy-8dvgl: configmap "kube-root-ca.crt" not found Sep 4 17:28:50.584371 kubelet[3182]: E0904 17:28:50.584268 3182 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6de11f49-cd49-4f7f-b140-473fdc14d511-kube-api-access-fc4zh podName:6de11f49-cd49-4f7f-b140-473fdc14d511 nodeName:}" failed. No retries permitted until 2024-09-04 17:28:51.084229968 +0000 UTC m=+13.558473368 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fc4zh" (UniqueName: "kubernetes.io/projected/6de11f49-cd49-4f7f-b140-473fdc14d511-kube-api-access-fc4zh") pod "kube-proxy-8dvgl" (UID: "6de11f49-cd49-4f7f-b140-473fdc14d511") : configmap "kube-root-ca.crt" not found Sep 4 17:28:51.177738 kubelet[3182]: I0904 17:28:51.177692 3182 topology_manager.go:215] "Topology Admit Handler" podUID="713a063e-9cb7-4ed0-8330-e5c452403a96" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-5w57n" Sep 4 17:28:51.202512 systemd[1]: Created slice kubepods-besteffort-pod713a063e_9cb7_4ed0_8330_e5c452403a96.slice - libcontainer container kubepods-besteffort-pod713a063e_9cb7_4ed0_8330_e5c452403a96.slice. 
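Editor's note (not part of the log): the MountVolume.SetUp failure above is normal on a fresh cluster. The kube-root-ca.crt ConfigMap is only published into each namespace once the control plane is up, so the projected kube-api-access volume is retried after a delay ("durationBeforeRetry 500ms"). A hedged sketch of that retry-with-backoff pattern; the function name and doubling policy are illustrative, not the kubelet's exact implementation:

package main

import (
	"errors"
	"fmt"
	"time"
)

// setUpProjectedVolume stands in for the projected-volume mount that fails
// while the root CA ConfigMap is still missing.
func setUpProjectedVolume(attempt int) error {
	if attempt < 3 {
		return errors.New(`configmap "kube-root-ca.crt" not found`)
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond
	for attempt := 0; ; attempt++ {
		if err := setUpProjectedVolume(attempt); err != nil {
			fmt.Printf("attempt %d failed: %v; retrying in %s\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2 // the kubelet caps this; the cap is omitted in this sketch
			continue
		}
		fmt.Println("volume set up after", attempt+1, "attempts")
		return
	}
}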
Sep 4 17:28:51.283255 kubelet[3182]: I0904 17:28:51.283123 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/713a063e-9cb7-4ed0-8330-e5c452403a96-var-lib-calico\") pod \"tigera-operator-5d56685c77-5w57n\" (UID: \"713a063e-9cb7-4ed0-8330-e5c452403a96\") " pod="tigera-operator/tigera-operator-5d56685c77-5w57n" Sep 4 17:28:51.283255 kubelet[3182]: I0904 17:28:51.283237 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9v2h\" (UniqueName: \"kubernetes.io/projected/713a063e-9cb7-4ed0-8330-e5c452403a96-kube-api-access-x9v2h\") pod \"tigera-operator-5d56685c77-5w57n\" (UID: \"713a063e-9cb7-4ed0-8330-e5c452403a96\") " pod="tigera-operator/tigera-operator-5d56685c77-5w57n" Sep 4 17:28:51.332770 containerd[1690]: time="2024-09-04T17:28:51.332717469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8dvgl,Uid:6de11f49-cd49-4f7f-b140-473fdc14d511,Namespace:kube-system,Attempt:0,}" Sep 4 17:28:51.374761 containerd[1690]: time="2024-09-04T17:28:51.374638765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:28:51.374761 containerd[1690]: time="2024-09-04T17:28:51.374701566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:51.374761 containerd[1690]: time="2024-09-04T17:28:51.374719966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:28:51.374761 containerd[1690]: time="2024-09-04T17:28:51.374734066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:51.397467 systemd[1]: run-containerd-runc-k8s.io-d4b4c24073da8e58d1be21657bbb8191030ae203fef04909ebcf0bce1013976d-runc.B5Jt0C.mount: Deactivated successfully. Sep 4 17:28:51.411165 systemd[1]: Started cri-containerd-d4b4c24073da8e58d1be21657bbb8191030ae203fef04909ebcf0bce1013976d.scope - libcontainer container d4b4c24073da8e58d1be21657bbb8191030ae203fef04909ebcf0bce1013976d. 
Sep 4 17:28:51.430524 containerd[1690]: time="2024-09-04T17:28:51.430407560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8dvgl,Uid:6de11f49-cd49-4f7f-b140-473fdc14d511,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4b4c24073da8e58d1be21657bbb8191030ae203fef04909ebcf0bce1013976d\"" Sep 4 17:28:51.433997 containerd[1690]: time="2024-09-04T17:28:51.433917185Z" level=info msg="CreateContainer within sandbox \"d4b4c24073da8e58d1be21657bbb8191030ae203fef04909ebcf0bce1013976d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:28:51.467835 containerd[1690]: time="2024-09-04T17:28:51.467798825Z" level=info msg="CreateContainer within sandbox \"d4b4c24073da8e58d1be21657bbb8191030ae203fef04909ebcf0bce1013976d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"73716dd0fc252bc84400e6898494fe6b210b3593debf91c7994c19ee772bbf4a\"" Sep 4 17:28:51.468395 containerd[1690]: time="2024-09-04T17:28:51.468373029Z" level=info msg="StartContainer for \"73716dd0fc252bc84400e6898494fe6b210b3593debf91c7994c19ee772bbf4a\"" Sep 4 17:28:51.499037 systemd[1]: Started cri-containerd-73716dd0fc252bc84400e6898494fe6b210b3593debf91c7994c19ee772bbf4a.scope - libcontainer container 73716dd0fc252bc84400e6898494fe6b210b3593debf91c7994c19ee772bbf4a. Sep 4 17:28:51.506995 containerd[1690]: time="2024-09-04T17:28:51.506476199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-5w57n,Uid:713a063e-9cb7-4ed0-8330-e5c452403a96,Namespace:tigera-operator,Attempt:0,}" Sep 4 17:28:51.528008 containerd[1690]: time="2024-09-04T17:28:51.527973551Z" level=info msg="StartContainer for \"73716dd0fc252bc84400e6898494fe6b210b3593debf91c7994c19ee772bbf4a\" returns successfully" Sep 4 17:28:51.569388 containerd[1690]: time="2024-09-04T17:28:51.568648639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:28:51.569388 containerd[1690]: time="2024-09-04T17:28:51.568711740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:51.569388 containerd[1690]: time="2024-09-04T17:28:51.568736540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:28:51.569388 containerd[1690]: time="2024-09-04T17:28:51.568754440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:51.591048 systemd[1]: Started cri-containerd-c9e93b5ec6a9445ae5416d7f1f1a424ed24cb0f803531ebf0444332a75529f49.scope - libcontainer container c9e93b5ec6a9445ae5416d7f1f1a424ed24cb0f803531ebf0444332a75529f49. Sep 4 17:28:51.722354 containerd[1690]: time="2024-09-04T17:28:51.722130726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-5w57n,Uid:713a063e-9cb7-4ed0-8330-e5c452403a96,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c9e93b5ec6a9445ae5416d7f1f1a424ed24cb0f803531ebf0444332a75529f49\"" Sep 4 17:28:51.724205 containerd[1690]: time="2024-09-04T17:28:51.723961039Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 17:28:53.418416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3093045096.mount: Deactivated successfully. 
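Editor's note (not part of the log): the PullImage line above fetches quay.io/tigera/operator:v1.34.3 through the CRI. A hedged equivalent using containerd's Go client directly, under the same assumptions as the earlier sketch (default socket path, "k8s.io" namespace, no registry credentials):

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// WithPullUnpack unpacks the layers into a snapshot so the image is runnable.
	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.34.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}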
Sep 4 17:28:53.963543 containerd[1690]: time="2024-09-04T17:28:53.963442597Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:53.965784 containerd[1690]: time="2024-09-04T17:28:53.965723713Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136549" Sep 4 17:28:53.969729 containerd[1690]: time="2024-09-04T17:28:53.969611441Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:53.973956 containerd[1690]: time="2024-09-04T17:28:53.973903271Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:53.975166 containerd[1690]: time="2024-09-04T17:28:53.974584876Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.250586537s" Sep 4 17:28:53.975166 containerd[1690]: time="2024-09-04T17:28:53.974620176Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Sep 4 17:28:53.976448 containerd[1690]: time="2024-09-04T17:28:53.976410889Z" level=info msg="CreateContainer within sandbox \"c9e93b5ec6a9445ae5416d7f1f1a424ed24cb0f803531ebf0444332a75529f49\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 17:28:54.006260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2798764914.mount: Deactivated successfully. Sep 4 17:28:54.014219 containerd[1690]: time="2024-09-04T17:28:54.014122256Z" level=info msg="CreateContainer within sandbox \"c9e93b5ec6a9445ae5416d7f1f1a424ed24cb0f803531ebf0444332a75529f49\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"842abc08b3e6d03237da9e4d0df7c161195d8b79a61720f25c0d7d48abae1c6f\"" Sep 4 17:28:54.015815 containerd[1690]: time="2024-09-04T17:28:54.014958662Z" level=info msg="StartContainer for \"842abc08b3e6d03237da9e4d0df7c161195d8b79a61720f25c0d7d48abae1c6f\"" Sep 4 17:28:54.045986 systemd[1]: Started cri-containerd-842abc08b3e6d03237da9e4d0df7c161195d8b79a61720f25c0d7d48abae1c6f.scope - libcontainer container 842abc08b3e6d03237da9e4d0df7c161195d8b79a61720f25c0d7d48abae1c6f. 
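Editor's note (not part of the log): from the figures reported above (22,136,549 bytes read in 2.250586537s) the effective pull rate for the operator image works out to roughly 9.4 MiB/s; a one-liner to reproduce the arithmetic:

package main

import "fmt"

func main() {
	const bytesRead = 22136549.0  // "bytes read" from the log above
	const seconds = 2.250586537   // pull duration from the log above
	fmt.Printf("~%.1f MiB/s\n", bytesRead/seconds/(1<<20))
}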
Sep 4 17:28:54.074292 containerd[1690]: time="2024-09-04T17:28:54.074252482Z" level=info msg="StartContainer for \"842abc08b3e6d03237da9e4d0df7c161195d8b79a61720f25c0d7d48abae1c6f\" returns successfully" Sep 4 17:28:54.699972 kubelet[3182]: I0904 17:28:54.699896 3182 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8dvgl" podStartSLOduration=4.699813612 podCreationTimestamp="2024-09-04 17:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:28:51.697554052 +0000 UTC m=+14.171797352" watchObservedRunningTime="2024-09-04 17:28:54.699813612 +0000 UTC m=+17.174056912" Sep 4 17:28:54.700489 kubelet[3182]: I0904 17:28:54.700039 3182 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-5w57n" podStartSLOduration=1.448698471 podCreationTimestamp="2024-09-04 17:28:51 +0000 UTC" firstStartedPulling="2024-09-04 17:28:51.723556136 +0000 UTC m=+14.197799436" lastFinishedPulling="2024-09-04 17:28:53.974868278 +0000 UTC m=+16.449111578" observedRunningTime="2024-09-04 17:28:54.69961881 +0000 UTC m=+17.173862210" watchObservedRunningTime="2024-09-04 17:28:54.700010613 +0000 UTC m=+17.174253913" Sep 4 17:28:56.982004 kubelet[3182]: I0904 17:28:56.981963 3182 topology_manager.go:215] "Topology Admit Handler" podUID="6189ca44-3e48-4bda-a06b-ef2d1d913298" podNamespace="calico-system" podName="calico-typha-6d676b9b46-4lbq8" Sep 4 17:28:56.996964 systemd[1]: Created slice kubepods-besteffort-pod6189ca44_3e48_4bda_a06b_ef2d1d913298.slice - libcontainer container kubepods-besteffort-pod6189ca44_3e48_4bda_a06b_ef2d1d913298.slice. Sep 4 17:28:57.023882 kubelet[3182]: I0904 17:28:57.022458 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6189ca44-3e48-4bda-a06b-ef2d1d913298-tigera-ca-bundle\") pod \"calico-typha-6d676b9b46-4lbq8\" (UID: \"6189ca44-3e48-4bda-a06b-ef2d1d913298\") " pod="calico-system/calico-typha-6d676b9b46-4lbq8" Sep 4 17:28:57.023882 kubelet[3182]: I0904 17:28:57.022505 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nnsc\" (UniqueName: \"kubernetes.io/projected/6189ca44-3e48-4bda-a06b-ef2d1d913298-kube-api-access-2nnsc\") pod \"calico-typha-6d676b9b46-4lbq8\" (UID: \"6189ca44-3e48-4bda-a06b-ef2d1d913298\") " pod="calico-system/calico-typha-6d676b9b46-4lbq8" Sep 4 17:28:57.023882 kubelet[3182]: I0904 17:28:57.022535 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6189ca44-3e48-4bda-a06b-ef2d1d913298-typha-certs\") pod \"calico-typha-6d676b9b46-4lbq8\" (UID: \"6189ca44-3e48-4bda-a06b-ef2d1d913298\") " pod="calico-system/calico-typha-6d676b9b46-4lbq8" Sep 4 17:28:57.054523 kubelet[3182]: I0904 17:28:57.053886 3182 topology_manager.go:215] "Topology Admit Handler" podUID="a1f0aaaa-b130-4507-821f-e47edfb54981" podNamespace="calico-system" podName="calico-node-ph7q9" Sep 4 17:28:57.063042 systemd[1]: Created slice kubepods-besteffort-poda1f0aaaa_b130_4507_821f_e47edfb54981.slice - libcontainer container kubepods-besteffort-poda1f0aaaa_b130_4507_821f_e47edfb54981.slice. 
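Editor's note (not part of the log): the pod_startup_latency_tracker lines above report how long each pod took from creation to running. A hedged client-go sketch that approximates the same number from a pod's Ready condition; the kubeconfig path is a placeholder and the namespace/pod name are taken from the log for illustration:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-8dvgl", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
			fmt.Printf("pod became Ready %s after creation\n",
				cond.LastTransitionTime.Sub(pod.CreationTimestamp.Time))
		}
	}
}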
Sep 4 17:28:57.122830 kubelet[3182]: I0904 17:28:57.122791 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a1f0aaaa-b130-4507-821f-e47edfb54981-node-certs\") pod \"calico-node-ph7q9\" (UID: \"a1f0aaaa-b130-4507-821f-e47edfb54981\") " pod="calico-system/calico-node-ph7q9" Sep 4 17:28:57.123730 kubelet[3182]: I0904 17:28:57.123098 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a1f0aaaa-b130-4507-821f-e47edfb54981-cni-log-dir\") pod \"calico-node-ph7q9\" (UID: \"a1f0aaaa-b130-4507-821f-e47edfb54981\") " pod="calico-system/calico-node-ph7q9" Sep 4 17:28:57.123730 kubelet[3182]: I0904 17:28:57.123145 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1f0aaaa-b130-4507-821f-e47edfb54981-xtables-lock\") pod \"calico-node-ph7q9\" (UID: \"a1f0aaaa-b130-4507-821f-e47edfb54981\") " pod="calico-system/calico-node-ph7q9" Sep 4 17:28:57.123730 kubelet[3182]: I0904 17:28:57.123185 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a1f0aaaa-b130-4507-821f-e47edfb54981-cni-net-dir\") pod \"calico-node-ph7q9\" (UID: \"a1f0aaaa-b130-4507-821f-e47edfb54981\") " pod="calico-system/calico-node-ph7q9" Sep 4 17:28:57.123730 kubelet[3182]: I0904 17:28:57.123221 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1f0aaaa-b130-4507-821f-e47edfb54981-lib-modules\") pod \"calico-node-ph7q9\" (UID: \"a1f0aaaa-b130-4507-821f-e47edfb54981\") " pod="calico-system/calico-node-ph7q9" Sep 4 17:28:57.123730 kubelet[3182]: I0904 17:28:57.123254 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a1f0aaaa-b130-4507-821f-e47edfb54981-var-run-calico\") pod \"calico-node-ph7q9\" (UID: \"a1f0aaaa-b130-4507-821f-e47edfb54981\") " pod="calico-system/calico-node-ph7q9" Sep 4 17:28:57.124051 kubelet[3182]: I0904 17:28:57.123290 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a1f0aaaa-b130-4507-821f-e47edfb54981-flexvol-driver-host\") pod \"calico-node-ph7q9\" (UID: \"a1f0aaaa-b130-4507-821f-e47edfb54981\") " pod="calico-system/calico-node-ph7q9" Sep 4 17:28:57.124051 kubelet[3182]: I0904 17:28:57.123342 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1f0aaaa-b130-4507-821f-e47edfb54981-tigera-ca-bundle\") pod \"calico-node-ph7q9\" (UID: \"a1f0aaaa-b130-4507-821f-e47edfb54981\") " pod="calico-system/calico-node-ph7q9" Sep 4 17:28:57.124051 kubelet[3182]: I0904 17:28:57.123377 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a1f0aaaa-b130-4507-821f-e47edfb54981-cni-bin-dir\") pod \"calico-node-ph7q9\" (UID: \"a1f0aaaa-b130-4507-821f-e47edfb54981\") " pod="calico-system/calico-node-ph7q9" Sep 4 17:28:57.124051 kubelet[3182]: I0904 17:28:57.123414 3182 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb869\" (UniqueName: \"kubernetes.io/projected/a1f0aaaa-b130-4507-821f-e47edfb54981-kube-api-access-xb869\") pod \"calico-node-ph7q9\" (UID: \"a1f0aaaa-b130-4507-821f-e47edfb54981\") " pod="calico-system/calico-node-ph7q9" Sep 4 17:28:57.124051 kubelet[3182]: I0904 17:28:57.123464 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a1f0aaaa-b130-4507-821f-e47edfb54981-policysync\") pod \"calico-node-ph7q9\" (UID: \"a1f0aaaa-b130-4507-821f-e47edfb54981\") " pod="calico-system/calico-node-ph7q9" Sep 4 17:28:57.124319 kubelet[3182]: I0904 17:28:57.123501 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a1f0aaaa-b130-4507-821f-e47edfb54981-var-lib-calico\") pod \"calico-node-ph7q9\" (UID: \"a1f0aaaa-b130-4507-821f-e47edfb54981\") " pod="calico-system/calico-node-ph7q9" Sep 4 17:28:57.190921 kubelet[3182]: I0904 17:28:57.190213 3182 topology_manager.go:215] "Topology Admit Handler" podUID="ae313f17-0269-49cd-93a7-cf8ff23b72b7" podNamespace="calico-system" podName="csi-node-driver-b62lq" Sep 4 17:28:57.190921 kubelet[3182]: E0904 17:28:57.190551 3182 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b62lq" podUID="ae313f17-0269-49cd-93a7-cf8ff23b72b7" Sep 4 17:28:57.225039 kubelet[3182]: I0904 17:28:57.224696 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ae313f17-0269-49cd-93a7-cf8ff23b72b7-varrun\") pod \"csi-node-driver-b62lq\" (UID: \"ae313f17-0269-49cd-93a7-cf8ff23b72b7\") " pod="calico-system/csi-node-driver-b62lq" Sep 4 17:28:57.225039 kubelet[3182]: I0904 17:28:57.224738 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ae313f17-0269-49cd-93a7-cf8ff23b72b7-socket-dir\") pod \"csi-node-driver-b62lq\" (UID: \"ae313f17-0269-49cd-93a7-cf8ff23b72b7\") " pod="calico-system/csi-node-driver-b62lq" Sep 4 17:28:57.225039 kubelet[3182]: I0904 17:28:57.224779 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54hkp\" (UniqueName: \"kubernetes.io/projected/ae313f17-0269-49cd-93a7-cf8ff23b72b7-kube-api-access-54hkp\") pod \"csi-node-driver-b62lq\" (UID: \"ae313f17-0269-49cd-93a7-cf8ff23b72b7\") " pod="calico-system/csi-node-driver-b62lq" Sep 4 17:28:57.225039 kubelet[3182]: I0904 17:28:57.224819 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ae313f17-0269-49cd-93a7-cf8ff23b72b7-registration-dir\") pod \"csi-node-driver-b62lq\" (UID: \"ae313f17-0269-49cd-93a7-cf8ff23b72b7\") " pod="calico-system/csi-node-driver-b62lq" Sep 4 17:28:57.225039 kubelet[3182]: I0904 17:28:57.224883 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae313f17-0269-49cd-93a7-cf8ff23b72b7-kubelet-dir\") pod \"csi-node-driver-b62lq\" (UID: 
\"ae313f17-0269-49cd-93a7-cf8ff23b72b7\") " pod="calico-system/csi-node-driver-b62lq" Sep 4 17:28:57.230350 kubelet[3182]: E0904 17:28:57.230069 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.230350 kubelet[3182]: W0904 17:28:57.230086 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.230350 kubelet[3182]: E0904 17:28:57.230121 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.230532 kubelet[3182]: E0904 17:28:57.230380 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.230532 kubelet[3182]: W0904 17:28:57.230392 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.230532 kubelet[3182]: E0904 17:28:57.230471 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.231539 kubelet[3182]: E0904 17:28:57.230881 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.231539 kubelet[3182]: W0904 17:28:57.230894 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.231539 kubelet[3182]: E0904 17:28:57.231027 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.232354 kubelet[3182]: E0904 17:28:57.231946 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.232354 kubelet[3182]: W0904 17:28:57.231960 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.232354 kubelet[3182]: E0904 17:28:57.232193 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.235877 kubelet[3182]: E0904 17:28:57.235170 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.235994 kubelet[3182]: W0904 17:28:57.235978 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.236064 kubelet[3182]: E0904 17:28:57.236055 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:28:57.236386 kubelet[3182]: E0904 17:28:57.236314 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.237114 kubelet[3182]: W0904 17:28:57.236592 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.237114 kubelet[3182]: E0904 17:28:57.236616 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.238016 kubelet[3182]: E0904 17:28:57.237985 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.238016 kubelet[3182]: W0904 17:28:57.237999 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.239935 kubelet[3182]: E0904 17:28:57.238285 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.240181 kubelet[3182]: E0904 17:28:57.240145 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.240181 kubelet[3182]: W0904 17:28:57.240158 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.240436 kubelet[3182]: E0904 17:28:57.240423 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.240684 kubelet[3182]: E0904 17:28:57.240632 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.240684 kubelet[3182]: W0904 17:28:57.240645 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.240684 kubelet[3182]: E0904 17:28:57.240661 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.270432 kubelet[3182]: E0904 17:28:57.270417 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.270580 kubelet[3182]: W0904 17:28:57.270529 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.270580 kubelet[3182]: E0904 17:28:57.270553 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:28:57.318654 containerd[1690]: time="2024-09-04T17:28:57.318185718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d676b9b46-4lbq8,Uid:6189ca44-3e48-4bda-a06b-ef2d1d913298,Namespace:calico-system,Attempt:0,}" Sep 4 17:28:57.326611 kubelet[3182]: E0904 17:28:57.326444 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.326611 kubelet[3182]: W0904 17:28:57.326462 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.326611 kubelet[3182]: E0904 17:28:57.326485 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.327011 kubelet[3182]: E0904 17:28:57.326889 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.327011 kubelet[3182]: W0904 17:28:57.326903 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.327011 kubelet[3182]: E0904 17:28:57.326928 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.327901 kubelet[3182]: E0904 17:28:57.327885 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.328096 kubelet[3182]: W0904 17:28:57.327993 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.328475 kubelet[3182]: E0904 17:28:57.328176 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.328475 kubelet[3182]: E0904 17:28:57.328464 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.328475 kubelet[3182]: W0904 17:28:57.328476 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.328994 kubelet[3182]: E0904 17:28:57.328729 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:28:57.328994 kubelet[3182]: E0904 17:28:57.328731 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.328994 kubelet[3182]: W0904 17:28:57.328768 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.328994 kubelet[3182]: E0904 17:28:57.328782 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.329576 kubelet[3182]: E0904 17:28:57.329341 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.329576 kubelet[3182]: W0904 17:28:57.329355 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.329576 kubelet[3182]: E0904 17:28:57.329387 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.330070 kubelet[3182]: E0904 17:28:57.329669 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.330070 kubelet[3182]: W0904 17:28:57.329681 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.330070 kubelet[3182]: E0904 17:28:57.329708 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.330206 kubelet[3182]: E0904 17:28:57.330092 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.330206 kubelet[3182]: W0904 17:28:57.330104 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.330206 kubelet[3182]: E0904 17:28:57.330128 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.330748 kubelet[3182]: E0904 17:28:57.330418 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.330748 kubelet[3182]: W0904 17:28:57.330432 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.330748 kubelet[3182]: E0904 17:28:57.330518 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:28:57.330748 kubelet[3182]: E0904 17:28:57.330739 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.330748 kubelet[3182]: W0904 17:28:57.330750 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.331003 kubelet[3182]: E0904 17:28:57.330844 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.331053 kubelet[3182]: E0904 17:28:57.331007 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.331053 kubelet[3182]: W0904 17:28:57.331017 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.331600 kubelet[3182]: E0904 17:28:57.331106 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.331600 kubelet[3182]: E0904 17:28:57.331575 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.331600 kubelet[3182]: W0904 17:28:57.331588 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.331959 kubelet[3182]: E0904 17:28:57.331725 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.332188 kubelet[3182]: E0904 17:28:57.332136 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.332188 kubelet[3182]: W0904 17:28:57.332151 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.332188 kubelet[3182]: E0904 17:28:57.332186 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.332895 kubelet[3182]: E0904 17:28:57.332755 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.332895 kubelet[3182]: W0904 17:28:57.332770 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.332895 kubelet[3182]: E0904 17:28:57.332807 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:28:57.333236 kubelet[3182]: E0904 17:28:57.333217 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.333306 kubelet[3182]: W0904 17:28:57.333234 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.333354 kubelet[3182]: E0904 17:28:57.333334 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.333834 kubelet[3182]: E0904 17:28:57.333539 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.333834 kubelet[3182]: W0904 17:28:57.333553 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.333834 kubelet[3182]: E0904 17:28:57.333647 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.334028 kubelet[3182]: E0904 17:28:57.333956 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.334028 kubelet[3182]: W0904 17:28:57.333967 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.334028 kubelet[3182]: E0904 17:28:57.334017 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.334330 kubelet[3182]: E0904 17:28:57.334276 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.334330 kubelet[3182]: W0904 17:28:57.334288 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.334432 kubelet[3182]: E0904 17:28:57.334400 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.335075 kubelet[3182]: E0904 17:28:57.335036 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.335075 kubelet[3182]: W0904 17:28:57.335065 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.336683 kubelet[3182]: E0904 17:28:57.335086 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:28:57.336683 kubelet[3182]: E0904 17:28:57.335434 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.336683 kubelet[3182]: W0904 17:28:57.335446 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.336683 kubelet[3182]: E0904 17:28:57.335534 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.336683 kubelet[3182]: E0904 17:28:57.335721 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.336683 kubelet[3182]: W0904 17:28:57.335732 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.336683 kubelet[3182]: E0904 17:28:57.335819 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.336683 kubelet[3182]: E0904 17:28:57.335985 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.336683 kubelet[3182]: W0904 17:28:57.335995 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.336683 kubelet[3182]: E0904 17:28:57.336079 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.337691 kubelet[3182]: E0904 17:28:57.336222 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.337691 kubelet[3182]: W0904 17:28:57.336231 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.337691 kubelet[3182]: E0904 17:28:57.336261 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.337691 kubelet[3182]: E0904 17:28:57.336509 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.337691 kubelet[3182]: W0904 17:28:57.336521 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.337691 kubelet[3182]: E0904 17:28:57.336549 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:28:57.337691 kubelet[3182]: E0904 17:28:57.336888 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.337691 kubelet[3182]: W0904 17:28:57.336900 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.337691 kubelet[3182]: E0904 17:28:57.336916 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.352360 kubelet[3182]: E0904 17:28:57.351505 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:28:57.352360 kubelet[3182]: W0904 17:28:57.351519 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:28:57.352360 kubelet[3182]: E0904 17:28:57.351537 3182 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:28:57.369746 containerd[1690]: time="2024-09-04T17:28:57.369393001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ph7q9,Uid:a1f0aaaa-b130-4507-821f-e47edfb54981,Namespace:calico-system,Attempt:0,}" Sep 4 17:28:57.371499 containerd[1690]: time="2024-09-04T17:28:57.371263919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:28:57.371499 containerd[1690]: time="2024-09-04T17:28:57.371333920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:57.371499 containerd[1690]: time="2024-09-04T17:28:57.371359120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:28:57.371499 containerd[1690]: time="2024-09-04T17:28:57.371375920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:57.398030 systemd[1]: Started cri-containerd-318213ed0838068c29db82f390b53c52542202dd13efeda3722b8d2ad0382935.scope - libcontainer container 318213ed0838068c29db82f390b53c52542202dd13efeda3722b8d2ad0382935. Sep 4 17:28:57.428002 containerd[1690]: time="2024-09-04T17:28:57.427611151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:28:57.429620 containerd[1690]: time="2024-09-04T17:28:57.429523869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:57.429722 containerd[1690]: time="2024-09-04T17:28:57.429635770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:28:57.429840 containerd[1690]: time="2024-09-04T17:28:57.429750671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:57.456261 systemd[1]: Started cri-containerd-d2c66950bc1f96a8b9aaecf487d8dc6ae62623c551edbc0696bf17ee809a522e.scope - libcontainer container d2c66950bc1f96a8b9aaecf487d8dc6ae62623c551edbc0696bf17ee809a522e. Sep 4 17:28:57.511463 containerd[1690]: time="2024-09-04T17:28:57.509584825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ph7q9,Uid:a1f0aaaa-b130-4507-821f-e47edfb54981,Namespace:calico-system,Attempt:0,} returns sandbox id \"d2c66950bc1f96a8b9aaecf487d8dc6ae62623c551edbc0696bf17ee809a522e\"" Sep 4 17:28:57.517935 containerd[1690]: time="2024-09-04T17:28:57.517498600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d676b9b46-4lbq8,Uid:6189ca44-3e48-4bda-a06b-ef2d1d913298,Namespace:calico-system,Attempt:0,} returns sandbox id \"318213ed0838068c29db82f390b53c52542202dd13efeda3722b8d2ad0382935\"" Sep 4 17:28:57.518181 containerd[1690]: time="2024-09-04T17:28:57.518086605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:28:58.613476 kubelet[3182]: E0904 17:28:58.613417 3182 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b62lq" podUID="ae313f17-0269-49cd-93a7-cf8ff23b72b7" Sep 4 17:28:58.927540 containerd[1690]: time="2024-09-04T17:28:58.927439042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:58.930345 containerd[1690]: time="2024-09-04T17:28:58.930221160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Sep 4 17:28:58.934840 containerd[1690]: time="2024-09-04T17:28:58.934659690Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:58.939559 containerd[1690]: time="2024-09-04T17:28:58.939500622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:58.940253 containerd[1690]: time="2024-09-04T17:28:58.940207426Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.42208772s" Sep 4 17:28:58.940391 containerd[1690]: time="2024-09-04T17:28:58.940255027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Sep 4 17:28:58.941211 containerd[1690]: time="2024-09-04T17:28:58.941125732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:28:58.942571 containerd[1690]: time="2024-09-04T17:28:58.942537942Z" level=info msg="CreateContainer within sandbox \"d2c66950bc1f96a8b9aaecf487d8dc6ae62623c551edbc0696bf17ee809a522e\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:28:58.996324 containerd[1690]: time="2024-09-04T17:28:58.996287297Z" level=info msg="CreateContainer within sandbox \"d2c66950bc1f96a8b9aaecf487d8dc6ae62623c551edbc0696bf17ee809a522e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"54f804690d046a8ce7998cc66cba1069f9879ea97003fc7c5316624089b7e2f5\"" Sep 4 17:28:58.997055 containerd[1690]: time="2024-09-04T17:28:58.996836501Z" level=info msg="StartContainer for \"54f804690d046a8ce7998cc66cba1069f9879ea97003fc7c5316624089b7e2f5\"" Sep 4 17:28:59.036021 systemd[1]: Started cri-containerd-54f804690d046a8ce7998cc66cba1069f9879ea97003fc7c5316624089b7e2f5.scope - libcontainer container 54f804690d046a8ce7998cc66cba1069f9879ea97003fc7c5316624089b7e2f5. Sep 4 17:28:59.082989 containerd[1690]: time="2024-09-04T17:28:59.082945370Z" level=info msg="StartContainer for \"54f804690d046a8ce7998cc66cba1069f9879ea97003fc7c5316624089b7e2f5\" returns successfully" Sep 4 17:28:59.098927 systemd[1]: cri-containerd-54f804690d046a8ce7998cc66cba1069f9879ea97003fc7c5316624089b7e2f5.scope: Deactivated successfully. Sep 4 17:28:59.121786 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54f804690d046a8ce7998cc66cba1069f9879ea97003fc7c5316624089b7e2f5-rootfs.mount: Deactivated successfully. Sep 4 17:28:59.864373 containerd[1690]: time="2024-09-04T17:28:59.864110933Z" level=info msg="shim disconnected" id=54f804690d046a8ce7998cc66cba1069f9879ea97003fc7c5316624089b7e2f5 namespace=k8s.io Sep 4 17:28:59.864373 containerd[1690]: time="2024-09-04T17:28:59.864188134Z" level=warning msg="cleaning up after shim disconnected" id=54f804690d046a8ce7998cc66cba1069f9879ea97003fc7c5316624089b7e2f5 namespace=k8s.io Sep 4 17:28:59.864373 containerd[1690]: time="2024-09-04T17:28:59.864203034Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:29:00.614087 kubelet[3182]: E0904 17:29:00.613614 3182 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b62lq" podUID="ae313f17-0269-49cd-93a7-cf8ff23b72b7" Sep 4 17:29:02.311527 containerd[1690]: time="2024-09-04T17:29:02.311432410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:02.314051 containerd[1690]: time="2024-09-04T17:29:02.313816326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Sep 4 17:29:02.318265 containerd[1690]: time="2024-09-04T17:29:02.318232455Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:02.322827 containerd[1690]: time="2024-09-04T17:29:02.322680984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:02.324104 containerd[1690]: time="2024-09-04T17:29:02.323873292Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 3.38269406s" Sep 4 17:29:02.324104 containerd[1690]: time="2024-09-04T17:29:02.323910992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Sep 4 17:29:02.325164 containerd[1690]: time="2024-09-04T17:29:02.325132800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:29:02.340109 containerd[1690]: time="2024-09-04T17:29:02.340080299Z" level=info msg="CreateContainer within sandbox \"318213ed0838068c29db82f390b53c52542202dd13efeda3722b8d2ad0382935\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:29:02.382697 containerd[1690]: time="2024-09-04T17:29:02.382672081Z" level=info msg="CreateContainer within sandbox \"318213ed0838068c29db82f390b53c52542202dd13efeda3722b8d2ad0382935\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dc0f26e68676cb94606ad7356d49d5d67a06646dee50fe2bfc18a011fb122694\"" Sep 4 17:29:02.383099 containerd[1690]: time="2024-09-04T17:29:02.383075183Z" level=info msg="StartContainer for \"dc0f26e68676cb94606ad7356d49d5d67a06646dee50fe2bfc18a011fb122694\"" Sep 4 17:29:02.413192 systemd[1]: Started cri-containerd-dc0f26e68676cb94606ad7356d49d5d67a06646dee50fe2bfc18a011fb122694.scope - libcontainer container dc0f26e68676cb94606ad7356d49d5d67a06646dee50fe2bfc18a011fb122694. Sep 4 17:29:02.466136 containerd[1690]: time="2024-09-04T17:29:02.465622129Z" level=info msg="StartContainer for \"dc0f26e68676cb94606ad7356d49d5d67a06646dee50fe2bfc18a011fb122694\" returns successfully" Sep 4 17:29:02.614131 kubelet[3182]: E0904 17:29:02.613501 3182 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b62lq" podUID="ae313f17-0269-49cd-93a7-cf8ff23b72b7" Sep 4 17:29:03.715393 kubelet[3182]: I0904 17:29:03.715367 3182 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:29:04.613768 kubelet[3182]: E0904 17:29:04.613469 3182 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b62lq" podUID="ae313f17-0269-49cd-93a7-cf8ff23b72b7" Sep 4 17:29:06.128290 containerd[1690]: time="2024-09-04T17:29:06.128233981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:06.130679 containerd[1690]: time="2024-09-04T17:29:06.130633272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Sep 4 17:29:06.134103 containerd[1690]: time="2024-09-04T17:29:06.134018359Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:06.140135 containerd[1690]: time="2024-09-04T17:29:06.140071536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 4 17:29:06.140905 containerd[1690]: time="2024-09-04T17:29:06.140771433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 3.815598432s" Sep 4 17:29:06.140905 containerd[1690]: time="2024-09-04T17:29:06.140806333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Sep 4 17:29:06.143165 containerd[1690]: time="2024-09-04T17:29:06.143114124Z" level=info msg="CreateContainer within sandbox \"d2c66950bc1f96a8b9aaecf487d8dc6ae62623c551edbc0696bf17ee809a522e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:29:06.188461 containerd[1690]: time="2024-09-04T17:29:06.188428052Z" level=info msg="CreateContainer within sandbox \"d2c66950bc1f96a8b9aaecf487d8dc6ae62623c551edbc0696bf17ee809a522e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f703d098c56b2024d613685603440bb33440a9dcc64e7bc35619558062294990\"" Sep 4 17:29:06.189786 containerd[1690]: time="2024-09-04T17:29:06.188896850Z" level=info msg="StartContainer for \"f703d098c56b2024d613685603440bb33440a9dcc64e7bc35619558062294990\"" Sep 4 17:29:06.222320 systemd[1]: Started cri-containerd-f703d098c56b2024d613685603440bb33440a9dcc64e7bc35619558062294990.scope - libcontainer container f703d098c56b2024d613685603440bb33440a9dcc64e7bc35619558062294990. Sep 4 17:29:06.262440 containerd[1690]: time="2024-09-04T17:29:06.262395671Z" level=info msg="StartContainer for \"f703d098c56b2024d613685603440bb33440a9dcc64e7bc35619558062294990\" returns successfully" Sep 4 17:29:06.613910 kubelet[3182]: E0904 17:29:06.613874 3182 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b62lq" podUID="ae313f17-0269-49cd-93a7-cf8ff23b72b7" Sep 4 17:29:06.733966 kubelet[3182]: I0904 17:29:06.733323 3182 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6d676b9b46-4lbq8" podStartSLOduration=5.928976609 podCreationTimestamp="2024-09-04 17:28:56 +0000 UTC" firstStartedPulling="2024-09-04 17:28:57.520097324 +0000 UTC m=+19.994340624" lastFinishedPulling="2024-09-04 17:29:02.324399396 +0000 UTC m=+24.798642696" observedRunningTime="2024-09-04 17:29:02.722470727 +0000 UTC m=+25.196714027" watchObservedRunningTime="2024-09-04 17:29:06.733278681 +0000 UTC m=+29.207521981" Sep 4 17:29:07.632557 containerd[1690]: time="2024-09-04T17:29:07.632499296Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:29:07.635336 systemd[1]: cri-containerd-f703d098c56b2024d613685603440bb33440a9dcc64e7bc35619558062294990.scope: Deactivated successfully. Sep 4 17:29:07.657416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f703d098c56b2024d613685603440bb33440a9dcc64e7bc35619558062294990-rootfs.mount: Deactivated successfully. 
Sep 4 17:29:07.690201 kubelet[3182]: I0904 17:29:07.689141 3182 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Sep 4 17:29:08.141079 kubelet[3182]: I0904 17:29:07.709969 3182 topology_manager.go:215] "Topology Admit Handler" podUID="4c2647e3-902b-4afd-82a0-7d9247354ab6" podNamespace="kube-system" podName="coredns-5dd5756b68-dhgqf" Sep 4 17:29:08.141079 kubelet[3182]: I0904 17:29:07.722434 3182 topology_manager.go:215] "Topology Admit Handler" podUID="40c76adf-c41f-4f0d-a3b3-98fb776074be" podNamespace="kube-system" podName="coredns-5dd5756b68-fnc7v" Sep 4 17:29:08.141079 kubelet[3182]: I0904 17:29:07.726402 3182 topology_manager.go:215] "Topology Admit Handler" podUID="5c37808a-484d-4381-8069-9b46cdacb5ee" podNamespace="calico-system" podName="calico-kube-controllers-c4b665f85-mmtlg" Sep 4 17:29:08.141079 kubelet[3182]: I0904 17:29:07.803081 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxxnk\" (UniqueName: \"kubernetes.io/projected/5c37808a-484d-4381-8069-9b46cdacb5ee-kube-api-access-sxxnk\") pod \"calico-kube-controllers-c4b665f85-mmtlg\" (UID: \"5c37808a-484d-4381-8069-9b46cdacb5ee\") " pod="calico-system/calico-kube-controllers-c4b665f85-mmtlg" Sep 4 17:29:08.141079 kubelet[3182]: I0904 17:29:07.803178 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c37808a-484d-4381-8069-9b46cdacb5ee-tigera-ca-bundle\") pod \"calico-kube-controllers-c4b665f85-mmtlg\" (UID: \"5c37808a-484d-4381-8069-9b46cdacb5ee\") " pod="calico-system/calico-kube-controllers-c4b665f85-mmtlg" Sep 4 17:29:08.141079 kubelet[3182]: I0904 17:29:07.803226 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnrgz\" (UniqueName: \"kubernetes.io/projected/40c76adf-c41f-4f0d-a3b3-98fb776074be-kube-api-access-fnrgz\") pod \"coredns-5dd5756b68-fnc7v\" (UID: \"40c76adf-c41f-4f0d-a3b3-98fb776074be\") " pod="kube-system/coredns-5dd5756b68-fnc7v" Sep 4 17:29:07.717987 systemd[1]: Created slice kubepods-burstable-pod4c2647e3_902b_4afd_82a0_7d9247354ab6.slice - libcontainer container kubepods-burstable-pod4c2647e3_902b_4afd_82a0_7d9247354ab6.slice. 
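The kubelet messages throughout this log use the klog header format visible above: a severity letter (I/W/E), the date as MMDD, the wall-clock time, the emitting PID, and file:line, followed by the structured message. A small Go sketch (illustrative only) that splits that header apart, using the "Fast updating node status" line from the entry above:

```go
package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches lines like:
//   I0904 17:29:07.689141 3182 kubelet_node_status.go:493] "..."
var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	line := `I0904 17:29:07.689141 3182 kubelet_node_status.go:493] "Fast updating node status as it just became ready"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Println("severity:", m[1]) // I = info, W = warning, E = error
	fmt.Println("date MMDD:", m[2])
	fmt.Println("time:     ", m[3])
	fmt.Println("pid:      ", m[4])
	fmt.Println("source:   ", m[5])
	fmt.Println("message:  ", m[6])
}
```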
Sep 4 17:29:08.141636 kubelet[3182]: I0904 17:29:07.803276 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c2647e3-902b-4afd-82a0-7d9247354ab6-config-volume\") pod \"coredns-5dd5756b68-dhgqf\" (UID: \"4c2647e3-902b-4afd-82a0-7d9247354ab6\") " pod="kube-system/coredns-5dd5756b68-dhgqf" Sep 4 17:29:08.141636 kubelet[3182]: I0904 17:29:07.803310 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9rgt\" (UniqueName: \"kubernetes.io/projected/4c2647e3-902b-4afd-82a0-7d9247354ab6-kube-api-access-x9rgt\") pod \"coredns-5dd5756b68-dhgqf\" (UID: \"4c2647e3-902b-4afd-82a0-7d9247354ab6\") " pod="kube-system/coredns-5dd5756b68-dhgqf" Sep 4 17:29:08.141636 kubelet[3182]: I0904 17:29:07.803342 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40c76adf-c41f-4f0d-a3b3-98fb776074be-config-volume\") pod \"coredns-5dd5756b68-fnc7v\" (UID: \"40c76adf-c41f-4f0d-a3b3-98fb776074be\") " pod="kube-system/coredns-5dd5756b68-fnc7v" Sep 4 17:29:07.734353 systemd[1]: Created slice kubepods-burstable-pod40c76adf_c41f_4f0d_a3b3_98fb776074be.slice - libcontainer container kubepods-burstable-pod40c76adf_c41f_4f0d_a3b3_98fb776074be.slice. Sep 4 17:29:07.739803 systemd[1]: Created slice kubepods-besteffort-pod5c37808a_484d_4381_8069_9b46cdacb5ee.slice - libcontainer container kubepods-besteffort-pod5c37808a_484d_4381_8069_9b46cdacb5ee.slice. Sep 4 17:29:08.444882 containerd[1690]: time="2024-09-04T17:29:08.444304027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4b665f85-mmtlg,Uid:5c37808a-484d-4381-8069-9b46cdacb5ee,Namespace:calico-system,Attempt:0,}" Sep 4 17:29:08.445071 containerd[1690]: time="2024-09-04T17:29:08.444930632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-dhgqf,Uid:4c2647e3-902b-4afd-82a0-7d9247354ab6,Namespace:kube-system,Attempt:0,}" Sep 4 17:29:08.445463 containerd[1690]: time="2024-09-04T17:29:08.445401036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fnc7v,Uid:40c76adf-c41f-4f0d-a3b3-98fb776074be,Namespace:kube-system,Attempt:0,}" Sep 4 17:29:08.618338 systemd[1]: Created slice kubepods-besteffort-podae313f17_0269_49cd_93a7_cf8ff23b72b7.slice - libcontainer container kubepods-besteffort-podae313f17_0269_49cd_93a7_cf8ff23b72b7.slice. 
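Every sandbox creation attempt that follows fails with the same CNI error: the Calico plugin will not set up pod networking until /var/lib/calico/nodename exists, and that file only appears once the calico-node container being brought up above is running with /var/lib/calico mounted. A minimal Go sketch of that readiness gate, reusing the path and wording from the errors below (an illustration of the check, not the plugin's actual code):

```go
package main

import (
	"fmt"
	"os"
)

// nodenameReady reports whether the prerequisite file for the Calico CNI
// plugin exists; the error text mirrors the sandbox failures below.
func nodenameReady(path string) error {
	if _, err := os.Stat(path); err != nil {
		return fmt.Errorf("%v: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return nil
}

func main() {
	if err := nodenameReady("/var/lib/calico/nodename"); err != nil {
		fmt.Println("CNI not ready:", err)
		return
	}
	fmt.Println("CNI prerequisites satisfied")
}
```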
Sep 4 17:29:08.620787 containerd[1690]: time="2024-09-04T17:29:08.620748390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b62lq,Uid:ae313f17-0269-49cd-93a7-cf8ff23b72b7,Namespace:calico-system,Attempt:0,}" Sep 4 17:29:09.768094 containerd[1690]: time="2024-09-04T17:29:09.768024303Z" level=info msg="shim disconnected" id=f703d098c56b2024d613685603440bb33440a9dcc64e7bc35619558062294990 namespace=k8s.io Sep 4 17:29:09.768094 containerd[1690]: time="2024-09-04T17:29:09.768081703Z" level=warning msg="cleaning up after shim disconnected" id=f703d098c56b2024d613685603440bb33440a9dcc64e7bc35619558062294990 namespace=k8s.io Sep 4 17:29:09.768094 containerd[1690]: time="2024-09-04T17:29:09.768093603Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:29:09.978701 containerd[1690]: time="2024-09-04T17:29:09.977971944Z" level=error msg="Failed to destroy network for sandbox \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:29:09.979080 containerd[1690]: time="2024-09-04T17:29:09.979028052Z" level=error msg="encountered an error cleaning up failed sandbox \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:29:09.979187 containerd[1690]: time="2024-09-04T17:29:09.979118153Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-dhgqf,Uid:4c2647e3-902b-4afd-82a0-7d9247354ab6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:29:09.979967 kubelet[3182]: E0904 17:29:09.979433 3182 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:29:09.979967 kubelet[3182]: E0904 17:29:09.979505 3182 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-dhgqf" Sep 4 17:29:09.979967 kubelet[3182]: E0904 17:29:09.979536 3182 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-5dd5756b68-dhgqf" Sep 4 17:29:09.980411 kubelet[3182]: E0904 17:29:09.979616 3182 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-dhgqf_kube-system(4c2647e3-902b-4afd-82a0-7d9247354ab6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-dhgqf_kube-system(4c2647e3-902b-4afd-82a0-7d9247354ab6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-dhgqf" podUID="4c2647e3-902b-4afd-82a0-7d9247354ab6" Sep 4 17:29:10.020877 containerd[1690]: time="2024-09-04T17:29:10.020493096Z" level=error msg="Failed to destroy network for sandbox \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:29:10.021564 containerd[1690]: time="2024-09-04T17:29:10.021413404Z" level=error msg="encountered an error cleaning up failed sandbox \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:29:10.021754 containerd[1690]: time="2024-09-04T17:29:10.021656906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fnc7v,Uid:40c76adf-c41f-4f0d-a3b3-98fb776074be,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:29:10.021947 containerd[1690]: time="2024-09-04T17:29:10.021678006Z" level=error msg="Failed to destroy network for sandbox \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:29:10.022608 kubelet[3182]: E0904 17:29:10.022570 3182 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:29:10.022704 kubelet[3182]: E0904 17:29:10.022629 3182 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-fnc7v" Sep 4 
17:29:10.022704 kubelet[3182]: E0904 17:29:10.022655 3182 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-fnc7v" Sep 4 17:29:10.023047 kubelet[3182]: E0904 17:29:10.022710 3182 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-fnc7v_kube-system(40c76adf-c41f-4f0d-a3b3-98fb776074be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-fnc7v_kube-system(40c76adf-c41f-4f0d-a3b3-98fb776074be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-fnc7v" podUID="40c76adf-c41f-4f0d-a3b3-98fb776074be" Sep 4 17:29:10.024115 containerd[1690]: time="2024-09-04T17:29:10.024079426Z" level=error msg="encountered an error cleaning up failed sandbox \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:29:10.024297 containerd[1690]: time="2024-09-04T17:29:10.024133326Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4b665f85-mmtlg,Uid:5c37808a-484d-4381-8069-9b46cdacb5ee,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:29:10.024488 kubelet[3182]: E0904 17:29:10.024339 3182 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:29:10.024488 kubelet[3182]: E0904 17:29:10.024387 3182 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4b665f85-mmtlg" Sep 4 17:29:10.024488 kubelet[3182]: E0904 17:29:10.024414 3182 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4b665f85-mmtlg" Sep 4 17:29:10.024943 kubelet[3182]: E0904 17:29:10.024476 3182 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c4b665f85-mmtlg_calico-system(5c37808a-484d-4381-8069-9b46cdacb5ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c4b665f85-mmtlg_calico-system(5c37808a-484d-4381-8069-9b46cdacb5ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c4b665f85-mmtlg" podUID="5c37808a-484d-4381-8069-9b46cdacb5ee" Sep 4 17:29:10.030656 containerd[1690]: time="2024-09-04T17:29:10.030624580Z" level=error msg="Failed to destroy network for sandbox \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:29:10.030918 containerd[1690]: time="2024-09-04T17:29:10.030890982Z" level=error msg="encountered an error cleaning up failed sandbox \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:29:10.031027 containerd[1690]: time="2024-09-04T17:29:10.030933383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b62lq,Uid:ae313f17-0269-49cd-93a7-cf8ff23b72b7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:29:10.031124 kubelet[3182]: E0904 17:29:10.031081 3182 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:29:10.031191 kubelet[3182]: E0904 17:29:10.031142 3182 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b62lq" Sep 4 17:29:10.031191 kubelet[3182]: E0904 17:29:10.031169 3182 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b62lq" Sep 4 17:29:10.031291 kubelet[3182]: E0904 17:29:10.031237 3182 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b62lq_calico-system(ae313f17-0269-49cd-93a7-cf8ff23b72b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b62lq_calico-system(ae313f17-0269-49cd-93a7-cf8ff23b72b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b62lq" podUID="ae313f17-0269-49cd-93a7-cf8ff23b72b7" Sep 4 17:29:10.735878 kubelet[3182]: I0904 17:29:10.735703 3182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Sep 4 17:29:10.737205 containerd[1690]: time="2024-09-04T17:29:10.736583334Z" level=info msg="StopPodSandbox for \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\"" Sep 4 17:29:10.737205 containerd[1690]: time="2024-09-04T17:29:10.736904436Z" level=info msg="Ensure that sandbox 6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f in task-service has been cleanup successfully" Sep 4 17:29:10.737829 kubelet[3182]: I0904 17:29:10.737797 3182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Sep 4 17:29:10.739776 containerd[1690]: time="2024-09-04T17:29:10.739743360Z" level=info msg="StopPodSandbox for \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\"" Sep 4 17:29:10.740668 kubelet[3182]: I0904 17:29:10.740652 3182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Sep 4 17:29:10.741179 containerd[1690]: time="2024-09-04T17:29:10.740952970Z" level=info msg="Ensure that sandbox c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b in task-service has been cleanup successfully" Sep 4 17:29:10.741690 containerd[1690]: time="2024-09-04T17:29:10.741656876Z" level=info msg="StopPodSandbox for \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\"" Sep 4 17:29:10.742953 containerd[1690]: time="2024-09-04T17:29:10.742927486Z" level=info msg="Ensure that sandbox a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e in task-service has been cleanup successfully" Sep 4 17:29:10.748528 kubelet[3182]: I0904 17:29:10.748465 3182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Sep 4 17:29:10.749027 containerd[1690]: time="2024-09-04T17:29:10.748877736Z" level=info msg="StopPodSandbox for \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\"" Sep 4 17:29:10.749103 containerd[1690]: time="2024-09-04T17:29:10.749081137Z" level=info msg="Ensure that sandbox 90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7 in task-service has 
been cleanup successfully" Sep 4 17:29:10.752053 containerd[1690]: time="2024-09-04T17:29:10.750987553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 17:29:10.816163 containerd[1690]: time="2024-09-04T17:29:10.816087893Z" level=error msg="StopPodSandbox for \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\" failed" error="failed to destroy network for sandbox \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:29:10.816584 kubelet[3182]: E0904 17:29:10.816445 3182 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Sep 4 17:29:10.816584 kubelet[3182]: E0904 17:29:10.816522 3182 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f"} Sep 4 17:29:10.816584 kubelet[3182]: E0904 17:29:10.816564 3182 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40c76adf-c41f-4f0d-a3b3-98fb776074be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:29:10.816807 kubelet[3182]: E0904 17:29:10.816601 3182 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40c76adf-c41f-4f0d-a3b3-98fb776074be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-fnc7v" podUID="40c76adf-c41f-4f0d-a3b3-98fb776074be" Sep 4 17:29:10.819928 containerd[1690]: time="2024-09-04T17:29:10.819881825Z" level=error msg="StopPodSandbox for \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\" failed" error="failed to destroy network for sandbox \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:29:10.820284 kubelet[3182]: E0904 17:29:10.820110 3182 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Sep 4 17:29:10.820284 kubelet[3182]: E0904 17:29:10.820148 3182 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7"} Sep 4 17:29:10.820284 kubelet[3182]: E0904 17:29:10.820191 3182 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae313f17-0269-49cd-93a7-cf8ff23b72b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:29:10.820284 kubelet[3182]: E0904 17:29:10.820225 3182 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae313f17-0269-49cd-93a7-cf8ff23b72b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b62lq" podUID="ae313f17-0269-49cd-93a7-cf8ff23b72b7" Sep 4 17:29:10.835412 containerd[1690]: time="2024-09-04T17:29:10.834995650Z" level=error msg="StopPodSandbox for \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\" failed" error="failed to destroy network for sandbox \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:29:10.835517 kubelet[3182]: E0904 17:29:10.835219 3182 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Sep 4 17:29:10.835517 kubelet[3182]: E0904 17:29:10.835250 3182 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b"} Sep 4 17:29:10.835517 kubelet[3182]: E0904 17:29:10.835294 3182 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c2647e3-902b-4afd-82a0-7d9247354ab6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:29:10.835517 kubelet[3182]: E0904 17:29:10.835340 3182 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c2647e3-902b-4afd-82a0-7d9247354ab6\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-dhgqf" podUID="4c2647e3-902b-4afd-82a0-7d9247354ab6" Sep 4 17:29:10.835882 containerd[1690]: time="2024-09-04T17:29:10.835811457Z" level=error msg="StopPodSandbox for \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\" failed" error="failed to destroy network for sandbox \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:29:10.836036 kubelet[3182]: E0904 17:29:10.836013 3182 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Sep 4 17:29:10.836109 kubelet[3182]: E0904 17:29:10.836046 3182 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e"} Sep 4 17:29:10.836109 kubelet[3182]: E0904 17:29:10.836085 3182 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5c37808a-484d-4381-8069-9b46cdacb5ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:29:10.836206 kubelet[3182]: E0904 17:29:10.836121 3182 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5c37808a-484d-4381-8069-9b46cdacb5ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c4b665f85-mmtlg" podUID="5c37808a-484d-4381-8069-9b46cdacb5ee" Sep 4 17:29:10.860167 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f-shm.mount: Deactivated successfully. Sep 4 17:29:10.860305 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e-shm.mount: Deactivated successfully. Sep 4 17:29:10.860387 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b-shm.mount: Deactivated successfully. 
Sep 4 17:29:15.845493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2873065210.mount: Deactivated successfully. Sep 4 17:29:15.904587 containerd[1690]: time="2024-09-04T17:29:15.904540273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:15.906812 containerd[1690]: time="2024-09-04T17:29:15.906754493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Sep 4 17:29:15.912976 containerd[1690]: time="2024-09-04T17:29:15.912927647Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:15.917959 containerd[1690]: time="2024-09-04T17:29:15.917780889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:15.919063 containerd[1690]: time="2024-09-04T17:29:15.918902099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 5.167876146s" Sep 4 17:29:15.919063 containerd[1690]: time="2024-09-04T17:29:15.918939999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Sep 4 17:29:15.935115 containerd[1690]: time="2024-09-04T17:29:15.934927740Z" level=info msg="CreateContainer within sandbox \"d2c66950bc1f96a8b9aaecf487d8dc6ae62623c551edbc0696bf17ee809a522e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:29:15.987632 containerd[1690]: time="2024-09-04T17:29:15.987597001Z" level=info msg="CreateContainer within sandbox \"d2c66950bc1f96a8b9aaecf487d8dc6ae62623c551edbc0696bf17ee809a522e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"54b997c9253ae2605a9b8a79d7d9642c288e9afb515c755eb8a61f8503973c3c\"" Sep 4 17:29:15.988678 containerd[1690]: time="2024-09-04T17:29:15.987999405Z" level=info msg="StartContainer for \"54b997c9253ae2605a9b8a79d7d9642c288e9afb515c755eb8a61f8503973c3c\"" Sep 4 17:29:16.014266 systemd[1]: Started cri-containerd-54b997c9253ae2605a9b8a79d7d9642c288e9afb515c755eb8a61f8503973c3c.scope - libcontainer container 54b997c9253ae2605a9b8a79d7d9642c288e9afb515c755eb8a61f8503973c3c. Sep 4 17:29:16.042801 containerd[1690]: time="2024-09-04T17:29:16.042745885Z" level=info msg="StartContainer for \"54b997c9253ae2605a9b8a79d7d9642c288e9afb515c755eb8a61f8503973c3c\" returns successfully" Sep 4 17:29:16.566701 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 17:29:16.566815 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
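The pull requested at 17:29:10 completes here: containerd reports the calico/node v3.28.1 image pulled "in 5.167876146s", creates the calico-node container inside the existing sandbox, and starts it; the WireGuard module load that follows is consistent with calico-node probing the kernel for WireGuard support at startup. The reported duration can be sanity-checked against the surrounding log timestamps; a small Go sketch using the two timestamps copied from the log (the few tens of microseconds of difference are just where containerd starts and stops its own timer):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken verbatim from the log: the PullImage request and the
	// "Pulled image ... returns image reference" completion.
	start, err := time.Parse(time.RFC3339Nano, "2024-09-04T17:29:10.750987553Z")
	if err != nil {
		panic(err)
	}
	done, err := time.Parse(time.RFC3339Nano, "2024-09-04T17:29:15.918902099Z")
	if err != nil {
		panic(err)
	}
	fmt.Println(done.Sub(start)) // ≈5.167914546s, in line with the 5.167876146s containerd reports
}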
Sep 4 17:29:16.783762 kubelet[3182]: I0904 17:29:16.783720 3182 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-ph7q9" podStartSLOduration=1.380264363 podCreationTimestamp="2024-09-04 17:28:57 +0000 UTC" firstStartedPulling="2024-09-04 17:28:57.515793384 +0000 UTC m=+19.990036784" lastFinishedPulling="2024-09-04 17:29:15.919204202 +0000 UTC m=+38.393447502" observedRunningTime="2024-09-04 17:29:16.783158776 +0000 UTC m=+39.257402076" watchObservedRunningTime="2024-09-04 17:29:16.783675081 +0000 UTC m=+39.257918481" Sep 4 17:29:18.580389 kubelet[3182]: I0904 17:29:18.580265 3182 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:29:23.615139 containerd[1690]: time="2024-09-04T17:29:23.615073258Z" level=info msg="StopPodSandbox for \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\"" Sep 4 17:29:23.680711 containerd[1690]: 2024-09-04 17:29:23.652 [INFO][4401] k8s.go 608: Cleaning up netns ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Sep 4 17:29:23.680711 containerd[1690]: 2024-09-04 17:29:23.653 [INFO][4401] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" iface="eth0" netns="/var/run/netns/cni-5afc1778-fcd3-c87c-bd1a-e8cfc01bbfca" Sep 4 17:29:23.680711 containerd[1690]: 2024-09-04 17:29:23.654 [INFO][4401] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" iface="eth0" netns="/var/run/netns/cni-5afc1778-fcd3-c87c-bd1a-e8cfc01bbfca" Sep 4 17:29:23.680711 containerd[1690]: 2024-09-04 17:29:23.654 [INFO][4401] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" iface="eth0" netns="/var/run/netns/cni-5afc1778-fcd3-c87c-bd1a-e8cfc01bbfca" Sep 4 17:29:23.680711 containerd[1690]: 2024-09-04 17:29:23.654 [INFO][4401] k8s.go 615: Releasing IP address(es) ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Sep 4 17:29:23.680711 containerd[1690]: 2024-09-04 17:29:23.654 [INFO][4401] utils.go 188: Calico CNI releasing IP address ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Sep 4 17:29:23.680711 containerd[1690]: 2024-09-04 17:29:23.672 [INFO][4407] ipam_plugin.go 417: Releasing address using handleID ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" HandleID="k8s-pod-network.6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0" Sep 4 17:29:23.680711 containerd[1690]: 2024-09-04 17:29:23.672 [INFO][4407] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:29:23.680711 containerd[1690]: 2024-09-04 17:29:23.672 [INFO][4407] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:29:23.680711 containerd[1690]: 2024-09-04 17:29:23.676 [WARNING][4407] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" HandleID="k8s-pod-network.6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0" Sep 4 17:29:23.680711 containerd[1690]: 2024-09-04 17:29:23.676 [INFO][4407] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" HandleID="k8s-pod-network.6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0" Sep 4 17:29:23.680711 containerd[1690]: 2024-09-04 17:29:23.678 [INFO][4407] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:29:23.680711 containerd[1690]: 2024-09-04 17:29:23.679 [INFO][4401] k8s.go 621: Teardown processing complete. ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Sep 4 17:29:23.682628 containerd[1690]: time="2024-09-04T17:29:23.680856373Z" level=info msg="TearDown network for sandbox \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\" successfully" Sep 4 17:29:23.682628 containerd[1690]: time="2024-09-04T17:29:23.680888573Z" level=info msg="StopPodSandbox for \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\" returns successfully" Sep 4 17:29:23.682628 containerd[1690]: time="2024-09-04T17:29:23.681524778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fnc7v,Uid:40c76adf-c41f-4f0d-a3b3-98fb776074be,Namespace:kube-system,Attempt:1,}" Sep 4 17:29:23.685938 systemd[1]: run-netns-cni\x2d5afc1778\x2dfcd3\x2dc87c\x2dbd1a\x2de8cfc01bbfca.mount: Deactivated successfully. Sep 4 17:29:23.822712 systemd-networkd[1554]: cali9adc739ea14: Link UP Sep 4 17:29:23.822978 systemd-networkd[1554]: cali9adc739ea14: Gained carrier Sep 4 17:29:23.833331 containerd[1690]: 2024-09-04 17:29:23.743 [INFO][4414] utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 17:29:23.833331 containerd[1690]: 2024-09-04 17:29:23.751 [INFO][4414] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0 coredns-5dd5756b68- kube-system 40c76adf-c41f-4f0d-a3b3-98fb776074be 682 0 2024-09-04 17:28:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.2.1-a-1f7e34d344 coredns-5dd5756b68-fnc7v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9adc739ea14 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1" Namespace="kube-system" Pod="coredns-5dd5756b68-fnc7v" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-" Sep 4 17:29:23.833331 containerd[1690]: 2024-09-04 17:29:23.751 [INFO][4414] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1" Namespace="kube-system" Pod="coredns-5dd5756b68-fnc7v" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0" Sep 4 17:29:23.833331 containerd[1690]: 2024-09-04 17:29:23.778 [INFO][4424] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1" 
HandleID="k8s-pod-network.405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0" Sep 4 17:29:23.833331 containerd[1690]: 2024-09-04 17:29:23.785 [INFO][4424] ipam_plugin.go 270: Auto assigning IP ContainerID="405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1" HandleID="k8s-pod-network.405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001ff890), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.2.1-a-1f7e34d344", "pod":"coredns-5dd5756b68-fnc7v", "timestamp":"2024-09-04 17:29:23.778069734 +0000 UTC"}, Hostname:"ci-3975.2.1-a-1f7e34d344", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:29:23.833331 containerd[1690]: 2024-09-04 17:29:23.786 [INFO][4424] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:29:23.833331 containerd[1690]: 2024-09-04 17:29:23.786 [INFO][4424] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:29:23.833331 containerd[1690]: 2024-09-04 17:29:23.786 [INFO][4424] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-1f7e34d344' Sep 4 17:29:23.833331 containerd[1690]: 2024-09-04 17:29:23.787 [INFO][4424] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:23.833331 containerd[1690]: 2024-09-04 17:29:23.791 [INFO][4424] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:23.833331 containerd[1690]: 2024-09-04 17:29:23.795 [INFO][4424] ipam.go 489: Trying affinity for 192.168.76.64/26 host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:23.833331 containerd[1690]: 2024-09-04 17:29:23.796 [INFO][4424] ipam.go 155: Attempting to load block cidr=192.168.76.64/26 host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:23.833331 containerd[1690]: 2024-09-04 17:29:23.798 [INFO][4424] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.64/26 host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:23.833331 containerd[1690]: 2024-09-04 17:29:23.798 [INFO][4424] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.64/26 handle="k8s-pod-network.405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:23.833331 containerd[1690]: 2024-09-04 17:29:23.800 [INFO][4424] ipam.go 1685: Creating new handle: k8s-pod-network.405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1 Sep 4 17:29:23.833331 containerd[1690]: 2024-09-04 17:29:23.803 [INFO][4424] ipam.go 1203: Writing block in order to claim IPs block=192.168.76.64/26 handle="k8s-pod-network.405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:23.833331 containerd[1690]: 2024-09-04 17:29:23.807 [INFO][4424] ipam.go 1216: Successfully claimed IPs: [192.168.76.65/26] block=192.168.76.64/26 handle="k8s-pod-network.405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:23.833331 containerd[1690]: 2024-09-04 17:29:23.807 [INFO][4424] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.65/26] 
handle="k8s-pod-network.405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:23.833331 containerd[1690]: 2024-09-04 17:29:23.807 [INFO][4424] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:29:23.833331 containerd[1690]: 2024-09-04 17:29:23.808 [INFO][4424] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.76.65/26] IPv6=[] ContainerID="405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1" HandleID="k8s-pod-network.405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0" Sep 4 17:29:23.835453 containerd[1690]: 2024-09-04 17:29:23.809 [INFO][4414] k8s.go 386: Populated endpoint ContainerID="405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1" Namespace="kube-system" Pod="coredns-5dd5756b68-fnc7v" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"40c76adf-c41f-4f0d-a3b3-98fb776074be", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 28, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-1f7e34d344", ContainerID:"", Pod:"coredns-5dd5756b68-fnc7v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9adc739ea14", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:23.835453 containerd[1690]: 2024-09-04 17:29:23.809 [INFO][4414] k8s.go 387: Calico CNI using IPs: [192.168.76.65/32] ContainerID="405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1" Namespace="kube-system" Pod="coredns-5dd5756b68-fnc7v" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0" Sep 4 17:29:23.835453 containerd[1690]: 2024-09-04 17:29:23.809 [INFO][4414] dataplane_linux.go 68: Setting the host side veth name to cali9adc739ea14 ContainerID="405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1" Namespace="kube-system" Pod="coredns-5dd5756b68-fnc7v" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0" Sep 4 17:29:23.835453 containerd[1690]: 2024-09-04 17:29:23.820 [INFO][4414] 
dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1" Namespace="kube-system" Pod="coredns-5dd5756b68-fnc7v" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0" Sep 4 17:29:23.835453 containerd[1690]: 2024-09-04 17:29:23.821 [INFO][4414] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1" Namespace="kube-system" Pod="coredns-5dd5756b68-fnc7v" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"40c76adf-c41f-4f0d-a3b3-98fb776074be", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 28, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-1f7e34d344", ContainerID:"405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1", Pod:"coredns-5dd5756b68-fnc7v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9adc739ea14", MAC:"5e:f6:f0:d4:91:19", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:23.835453 containerd[1690]: 2024-09-04 17:29:23.830 [INFO][4414] k8s.go 500: Wrote updated endpoint to datastore ContainerID="405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1" Namespace="kube-system" Pod="coredns-5dd5756b68-fnc7v" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0" Sep 4 17:29:23.861389 containerd[1690]: time="2024-09-04T17:29:23.860938082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:29:23.861389 containerd[1690]: time="2024-09-04T17:29:23.861033183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:23.861389 containerd[1690]: time="2024-09-04T17:29:23.861086983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:29:23.861389 containerd[1690]: time="2024-09-04T17:29:23.861107183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:23.887990 systemd[1]: Started cri-containerd-405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1.scope - libcontainer container 405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1. Sep 4 17:29:23.923549 containerd[1690]: time="2024-09-04T17:29:23.923500371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fnc7v,Uid:40c76adf-c41f-4f0d-a3b3-98fb776074be,Namespace:kube-system,Attempt:1,} returns sandbox id \"405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1\"" Sep 4 17:29:23.926586 containerd[1690]: time="2024-09-04T17:29:23.926516795Z" level=info msg="CreateContainer within sandbox \"405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:29:23.966965 containerd[1690]: time="2024-09-04T17:29:23.966929311Z" level=info msg="CreateContainer within sandbox \"405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0f3b5dd465b68c9d67f21ba4cb3b97d24b3351300989c94925df4b4a8aeb9907\"" Sep 4 17:29:23.967393 containerd[1690]: time="2024-09-04T17:29:23.967335114Z" level=info msg="StartContainer for \"0f3b5dd465b68c9d67f21ba4cb3b97d24b3351300989c94925df4b4a8aeb9907\"" Sep 4 17:29:23.991300 systemd[1]: Started cri-containerd-0f3b5dd465b68c9d67f21ba4cb3b97d24b3351300989c94925df4b4a8aeb9907.scope - libcontainer container 0f3b5dd465b68c9d67f21ba4cb3b97d24b3351300989c94925df4b4a8aeb9907. Sep 4 17:29:24.014832 containerd[1690]: time="2024-09-04T17:29:24.014741485Z" level=info msg="StartContainer for \"0f3b5dd465b68c9d67f21ba4cb3b97d24b3351300989c94925df4b4a8aeb9907\" returns successfully" Sep 4 17:29:24.794364 kubelet[3182]: I0904 17:29:24.794031 3182 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-fnc7v" podStartSLOduration=33.793987582 podCreationTimestamp="2024-09-04 17:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:29:24.793377677 +0000 UTC m=+47.267621077" watchObservedRunningTime="2024-09-04 17:29:24.793987582 +0000 UTC m=+47.268230882" Sep 4 17:29:25.523102 systemd-networkd[1554]: cali9adc739ea14: Gained IPv6LL Sep 4 17:29:25.617256 containerd[1690]: time="2024-09-04T17:29:25.617203322Z" level=info msg="StopPodSandbox for \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\"" Sep 4 17:29:25.618712 containerd[1690]: time="2024-09-04T17:29:25.618257630Z" level=info msg="StopPodSandbox for \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\"" Sep 4 17:29:25.713841 containerd[1690]: 2024-09-04 17:29:25.677 [INFO][4597] k8s.go 608: Cleaning up netns ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Sep 4 17:29:25.713841 containerd[1690]: 2024-09-04 17:29:25.677 [INFO][4597] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" iface="eth0" netns="/var/run/netns/cni-a38a9bd9-d56a-7b89-3025-d9daa71961bb" Sep 4 17:29:25.713841 containerd[1690]: 2024-09-04 17:29:25.677 [INFO][4597] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" iface="eth0" netns="/var/run/netns/cni-a38a9bd9-d56a-7b89-3025-d9daa71961bb" Sep 4 17:29:25.713841 containerd[1690]: 2024-09-04 17:29:25.678 [INFO][4597] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" iface="eth0" netns="/var/run/netns/cni-a38a9bd9-d56a-7b89-3025-d9daa71961bb" Sep 4 17:29:25.713841 containerd[1690]: 2024-09-04 17:29:25.678 [INFO][4597] k8s.go 615: Releasing IP address(es) ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Sep 4 17:29:25.713841 containerd[1690]: 2024-09-04 17:29:25.678 [INFO][4597] utils.go 188: Calico CNI releasing IP address ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Sep 4 17:29:25.713841 containerd[1690]: 2024-09-04 17:29:25.705 [INFO][4615] ipam_plugin.go 417: Releasing address using handleID ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" HandleID="k8s-pod-network.a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Workload="ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0" Sep 4 17:29:25.713841 containerd[1690]: 2024-09-04 17:29:25.705 [INFO][4615] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:29:25.713841 containerd[1690]: 2024-09-04 17:29:25.705 [INFO][4615] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:29:25.713841 containerd[1690]: 2024-09-04 17:29:25.710 [WARNING][4615] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" HandleID="k8s-pod-network.a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Workload="ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0" Sep 4 17:29:25.713841 containerd[1690]: 2024-09-04 17:29:25.710 [INFO][4615] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" HandleID="k8s-pod-network.a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Workload="ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0" Sep 4 17:29:25.713841 containerd[1690]: 2024-09-04 17:29:25.711 [INFO][4615] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:29:25.713841 containerd[1690]: 2024-09-04 17:29:25.712 [INFO][4597] k8s.go 621: Teardown processing complete. 
ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Sep 4 17:29:25.714801 containerd[1690]: time="2024-09-04T17:29:25.714636384Z" level=info msg="TearDown network for sandbox \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\" successfully" Sep 4 17:29:25.714801 containerd[1690]: time="2024-09-04T17:29:25.714670284Z" level=info msg="StopPodSandbox for \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\" returns successfully" Sep 4 17:29:25.718880 containerd[1690]: time="2024-09-04T17:29:25.718502714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4b665f85-mmtlg,Uid:5c37808a-484d-4381-8069-9b46cdacb5ee,Namespace:calico-system,Attempt:1,}" Sep 4 17:29:25.720745 systemd[1]: run-netns-cni\x2da38a9bd9\x2dd56a\x2d7b89\x2d3025\x2dd9daa71961bb.mount: Deactivated successfully. Sep 4 17:29:25.724441 containerd[1690]: 2024-09-04 17:29:25.670 [INFO][4598] k8s.go 608: Cleaning up netns ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Sep 4 17:29:25.724441 containerd[1690]: 2024-09-04 17:29:25.670 [INFO][4598] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" iface="eth0" netns="/var/run/netns/cni-2b803bdf-1c85-6559-6f37-dcdca97b0733" Sep 4 17:29:25.724441 containerd[1690]: 2024-09-04 17:29:25.671 [INFO][4598] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" iface="eth0" netns="/var/run/netns/cni-2b803bdf-1c85-6559-6f37-dcdca97b0733" Sep 4 17:29:25.724441 containerd[1690]: 2024-09-04 17:29:25.671 [INFO][4598] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" iface="eth0" netns="/var/run/netns/cni-2b803bdf-1c85-6559-6f37-dcdca97b0733" Sep 4 17:29:25.724441 containerd[1690]: 2024-09-04 17:29:25.672 [INFO][4598] k8s.go 615: Releasing IP address(es) ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Sep 4 17:29:25.724441 containerd[1690]: 2024-09-04 17:29:25.672 [INFO][4598] utils.go 188: Calico CNI releasing IP address ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Sep 4 17:29:25.724441 containerd[1690]: 2024-09-04 17:29:25.706 [INFO][4611] ipam_plugin.go 417: Releasing address using handleID ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" HandleID="k8s-pod-network.c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0" Sep 4 17:29:25.724441 containerd[1690]: 2024-09-04 17:29:25.706 [INFO][4611] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:29:25.724441 containerd[1690]: 2024-09-04 17:29:25.711 [INFO][4611] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:29:25.724441 containerd[1690]: 2024-09-04 17:29:25.720 [WARNING][4611] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" HandleID="k8s-pod-network.c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0" Sep 4 17:29:25.724441 containerd[1690]: 2024-09-04 17:29:25.720 [INFO][4611] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" HandleID="k8s-pod-network.c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0" Sep 4 17:29:25.724441 containerd[1690]: 2024-09-04 17:29:25.722 [INFO][4611] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:29:25.724441 containerd[1690]: 2024-09-04 17:29:25.723 [INFO][4598] k8s.go 621: Teardown processing complete. ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Sep 4 17:29:25.724980 containerd[1690]: time="2024-09-04T17:29:25.724602662Z" level=info msg="TearDown network for sandbox \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\" successfully" Sep 4 17:29:25.724980 containerd[1690]: time="2024-09-04T17:29:25.724638462Z" level=info msg="StopPodSandbox for \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\" returns successfully" Sep 4 17:29:25.727249 containerd[1690]: time="2024-09-04T17:29:25.727215283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-dhgqf,Uid:4c2647e3-902b-4afd-82a0-7d9247354ab6,Namespace:kube-system,Attempt:1,}" Sep 4 17:29:25.729029 systemd[1]: run-netns-cni\x2d2b803bdf\x2d1c85\x2d6559\x2d6f37\x2ddcdca97b0733.mount: Deactivated successfully. Sep 4 17:29:25.915141 systemd-networkd[1554]: cali55878149bae: Link UP Sep 4 17:29:25.915406 systemd-networkd[1554]: cali55878149bae: Gained carrier Sep 4 17:29:25.927640 containerd[1690]: 2024-09-04 17:29:25.824 [INFO][4623] utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 17:29:25.927640 containerd[1690]: 2024-09-04 17:29:25.835 [INFO][4623] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0 calico-kube-controllers-c4b665f85- calico-system 5c37808a-484d-4381-8069-9b46cdacb5ee 705 0 2024-09-04 17:28:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c4b665f85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975.2.1-a-1f7e34d344 calico-kube-controllers-c4b665f85-mmtlg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali55878149bae [] []}} ContainerID="e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599" Namespace="calico-system" Pod="calico-kube-controllers-c4b665f85-mmtlg" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-" Sep 4 17:29:25.927640 containerd[1690]: 2024-09-04 17:29:25.835 [INFO][4623] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599" Namespace="calico-system" Pod="calico-kube-controllers-c4b665f85-mmtlg" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0" Sep 4 17:29:25.927640 containerd[1690]: 2024-09-04 17:29:25.870 [INFO][4647] 
ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599" HandleID="k8s-pod-network.e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599" Workload="ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0" Sep 4 17:29:25.927640 containerd[1690]: 2024-09-04 17:29:25.879 [INFO][4647] ipam_plugin.go 270: Auto assigning IP ContainerID="e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599" HandleID="k8s-pod-network.e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599" Workload="ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001ff860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.2.1-a-1f7e34d344", "pod":"calico-kube-controllers-c4b665f85-mmtlg", "timestamp":"2024-09-04 17:29:25.870986707 +0000 UTC"}, Hostname:"ci-3975.2.1-a-1f7e34d344", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:29:25.927640 containerd[1690]: 2024-09-04 17:29:25.879 [INFO][4647] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:29:25.927640 containerd[1690]: 2024-09-04 17:29:25.880 [INFO][4647] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:29:25.927640 containerd[1690]: 2024-09-04 17:29:25.880 [INFO][4647] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-1f7e34d344' Sep 4 17:29:25.927640 containerd[1690]: 2024-09-04 17:29:25.881 [INFO][4647] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:25.927640 containerd[1690]: 2024-09-04 17:29:25.885 [INFO][4647] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:25.927640 containerd[1690]: 2024-09-04 17:29:25.888 [INFO][4647] ipam.go 489: Trying affinity for 192.168.76.64/26 host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:25.927640 containerd[1690]: 2024-09-04 17:29:25.890 [INFO][4647] ipam.go 155: Attempting to load block cidr=192.168.76.64/26 host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:25.927640 containerd[1690]: 2024-09-04 17:29:25.892 [INFO][4647] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.64/26 host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:25.927640 containerd[1690]: 2024-09-04 17:29:25.892 [INFO][4647] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.64/26 handle="k8s-pod-network.e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:25.927640 containerd[1690]: 2024-09-04 17:29:25.894 [INFO][4647] ipam.go 1685: Creating new handle: k8s-pod-network.e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599 Sep 4 17:29:25.927640 containerd[1690]: 2024-09-04 17:29:25.900 [INFO][4647] ipam.go 1203: Writing block in order to claim IPs block=192.168.76.64/26 handle="k8s-pod-network.e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:25.927640 containerd[1690]: 2024-09-04 17:29:25.904 [INFO][4647] ipam.go 1216: Successfully claimed IPs: [192.168.76.66/26] block=192.168.76.64/26 
handle="k8s-pod-network.e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:25.927640 containerd[1690]: 2024-09-04 17:29:25.904 [INFO][4647] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.66/26] handle="k8s-pod-network.e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:25.927640 containerd[1690]: 2024-09-04 17:29:25.904 [INFO][4647] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:29:25.927640 containerd[1690]: 2024-09-04 17:29:25.904 [INFO][4647] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.76.66/26] IPv6=[] ContainerID="e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599" HandleID="k8s-pod-network.e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599" Workload="ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0" Sep 4 17:29:25.928697 containerd[1690]: 2024-09-04 17:29:25.907 [INFO][4623] k8s.go 386: Populated endpoint ContainerID="e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599" Namespace="calico-system" Pod="calico-kube-controllers-c4b665f85-mmtlg" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0", GenerateName:"calico-kube-controllers-c4b665f85-", Namespace:"calico-system", SelfLink:"", UID:"5c37808a-484d-4381-8069-9b46cdacb5ee", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 28, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c4b665f85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-1f7e34d344", ContainerID:"", Pod:"calico-kube-controllers-c4b665f85-mmtlg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali55878149bae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:25.928697 containerd[1690]: 2024-09-04 17:29:25.907 [INFO][4623] k8s.go 387: Calico CNI using IPs: [192.168.76.66/32] ContainerID="e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599" Namespace="calico-system" Pod="calico-kube-controllers-c4b665f85-mmtlg" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0" Sep 4 17:29:25.928697 containerd[1690]: 2024-09-04 17:29:25.907 [INFO][4623] dataplane_linux.go 68: Setting the host side veth name to cali55878149bae ContainerID="e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599" Namespace="calico-system" Pod="calico-kube-controllers-c4b665f85-mmtlg" 
WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0" Sep 4 17:29:25.928697 containerd[1690]: 2024-09-04 17:29:25.916 [INFO][4623] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599" Namespace="calico-system" Pod="calico-kube-controllers-c4b665f85-mmtlg" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0" Sep 4 17:29:25.928697 containerd[1690]: 2024-09-04 17:29:25.916 [INFO][4623] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599" Namespace="calico-system" Pod="calico-kube-controllers-c4b665f85-mmtlg" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0", GenerateName:"calico-kube-controllers-c4b665f85-", Namespace:"calico-system", SelfLink:"", UID:"5c37808a-484d-4381-8069-9b46cdacb5ee", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 28, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c4b665f85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-1f7e34d344", ContainerID:"e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599", Pod:"calico-kube-controllers-c4b665f85-mmtlg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali55878149bae", MAC:"42:5b:34:1a:2d:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:25.928697 containerd[1690]: 2024-09-04 17:29:25.924 [INFO][4623] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599" Namespace="calico-system" Pod="calico-kube-controllers-c4b665f85-mmtlg" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0" Sep 4 17:29:25.947544 systemd-networkd[1554]: cali9fbc1f7cece: Link UP Sep 4 17:29:25.948814 systemd-networkd[1554]: cali9fbc1f7cece: Gained carrier Sep 4 17:29:25.967168 containerd[1690]: 2024-09-04 17:29:25.826 [INFO][4628] utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 17:29:25.967168 containerd[1690]: 2024-09-04 17:29:25.837 [INFO][4628] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0 coredns-5dd5756b68- kube-system 4c2647e3-902b-4afd-82a0-7d9247354ab6 704 0 2024-09-04 17:28:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.2.1-a-1f7e34d344 coredns-5dd5756b68-dhgqf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9fbc1f7cece [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336" Namespace="kube-system" Pod="coredns-5dd5756b68-dhgqf" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-" Sep 4 17:29:25.967168 containerd[1690]: 2024-09-04 17:29:25.837 [INFO][4628] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336" Namespace="kube-system" Pod="coredns-5dd5756b68-dhgqf" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0" Sep 4 17:29:25.967168 containerd[1690]: 2024-09-04 17:29:25.876 [INFO][4652] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336" HandleID="k8s-pod-network.229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0" Sep 4 17:29:25.967168 containerd[1690]: 2024-09-04 17:29:25.884 [INFO][4652] ipam_plugin.go 270: Auto assigning IP ContainerID="229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336" HandleID="k8s-pod-network.229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efde0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.2.1-a-1f7e34d344", "pod":"coredns-5dd5756b68-dhgqf", "timestamp":"2024-09-04 17:29:25.876129748 +0000 UTC"}, Hostname:"ci-3975.2.1-a-1f7e34d344", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:29:25.967168 containerd[1690]: 2024-09-04 17:29:25.884 [INFO][4652] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:29:25.967168 containerd[1690]: 2024-09-04 17:29:25.904 [INFO][4652] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:29:25.967168 containerd[1690]: 2024-09-04 17:29:25.904 [INFO][4652] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-1f7e34d344' Sep 4 17:29:25.967168 containerd[1690]: 2024-09-04 17:29:25.906 [INFO][4652] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:25.967168 containerd[1690]: 2024-09-04 17:29:25.914 [INFO][4652] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:25.967168 containerd[1690]: 2024-09-04 17:29:25.920 [INFO][4652] ipam.go 489: Trying affinity for 192.168.76.64/26 host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:25.967168 containerd[1690]: 2024-09-04 17:29:25.927 [INFO][4652] ipam.go 155: Attempting to load block cidr=192.168.76.64/26 host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:25.967168 containerd[1690]: 2024-09-04 17:29:25.932 [INFO][4652] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.64/26 host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:25.967168 containerd[1690]: 2024-09-04 17:29:25.932 [INFO][4652] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.64/26 handle="k8s-pod-network.229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:25.967168 containerd[1690]: 2024-09-04 17:29:25.933 [INFO][4652] ipam.go 1685: Creating new handle: k8s-pod-network.229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336 Sep 4 17:29:25.967168 containerd[1690]: 2024-09-04 17:29:25.938 [INFO][4652] ipam.go 1203: Writing block in order to claim IPs block=192.168.76.64/26 handle="k8s-pod-network.229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:25.967168 containerd[1690]: 2024-09-04 17:29:25.942 [INFO][4652] ipam.go 1216: Successfully claimed IPs: [192.168.76.67/26] block=192.168.76.64/26 handle="k8s-pod-network.229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:25.967168 containerd[1690]: 2024-09-04 17:29:25.942 [INFO][4652] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.67/26] handle="k8s-pod-network.229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:25.967168 containerd[1690]: 2024-09-04 17:29:25.942 [INFO][4652] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:29:25.967168 containerd[1690]: 2024-09-04 17:29:25.942 [INFO][4652] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.76.67/26] IPv6=[] ContainerID="229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336" HandleID="k8s-pod-network.229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0" Sep 4 17:29:25.968674 containerd[1690]: 2024-09-04 17:29:25.944 [INFO][4628] k8s.go 386: Populated endpoint ContainerID="229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336" Namespace="kube-system" Pod="coredns-5dd5756b68-dhgqf" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"4c2647e3-902b-4afd-82a0-7d9247354ab6", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 28, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-1f7e34d344", ContainerID:"", Pod:"coredns-5dd5756b68-dhgqf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9fbc1f7cece", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:25.968674 containerd[1690]: 2024-09-04 17:29:25.945 [INFO][4628] k8s.go 387: Calico CNI using IPs: [192.168.76.67/32] ContainerID="229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336" Namespace="kube-system" Pod="coredns-5dd5756b68-dhgqf" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0" Sep 4 17:29:25.968674 containerd[1690]: 2024-09-04 17:29:25.945 [INFO][4628] dataplane_linux.go 68: Setting the host side veth name to cali9fbc1f7cece ContainerID="229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336" Namespace="kube-system" Pod="coredns-5dd5756b68-dhgqf" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0" Sep 4 17:29:25.968674 containerd[1690]: 2024-09-04 17:29:25.946 [INFO][4628] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336" Namespace="kube-system" Pod="coredns-5dd5756b68-dhgqf" 
WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0" Sep 4 17:29:25.968674 containerd[1690]: 2024-09-04 17:29:25.947 [INFO][4628] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336" Namespace="kube-system" Pod="coredns-5dd5756b68-dhgqf" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"4c2647e3-902b-4afd-82a0-7d9247354ab6", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 28, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-1f7e34d344", ContainerID:"229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336", Pod:"coredns-5dd5756b68-dhgqf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9fbc1f7cece", MAC:"86:50:22:70:00:d9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:25.968674 containerd[1690]: 2024-09-04 17:29:25.965 [INFO][4628] k8s.go 500: Wrote updated endpoint to datastore ContainerID="229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336" Namespace="kube-system" Pod="coredns-5dd5756b68-dhgqf" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0" Sep 4 17:29:25.975163 containerd[1690]: time="2024-09-04T17:29:25.974978121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:29:25.975163 containerd[1690]: time="2024-09-04T17:29:25.975081822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:25.975408 containerd[1690]: time="2024-09-04T17:29:25.975109722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:29:25.975408 containerd[1690]: time="2024-09-04T17:29:25.975191423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:25.996227 systemd[1]: Started cri-containerd-e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599.scope - libcontainer container e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599. Sep 4 17:29:26.003669 containerd[1690]: time="2024-09-04T17:29:26.002884239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:29:26.003669 containerd[1690]: time="2024-09-04T17:29:26.002944340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:26.003669 containerd[1690]: time="2024-09-04T17:29:26.002968440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:29:26.003669 containerd[1690]: time="2024-09-04T17:29:26.002985540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:26.028153 systemd[1]: Started cri-containerd-229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336.scope - libcontainer container 229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336. Sep 4 17:29:26.061648 containerd[1690]: time="2024-09-04T17:29:26.061606099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4b665f85-mmtlg,Uid:5c37808a-484d-4381-8069-9b46cdacb5ee,Namespace:calico-system,Attempt:1,} returns sandbox id \"e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599\"" Sep 4 17:29:26.065522 containerd[1690]: time="2024-09-04T17:29:26.065371028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:29:26.076505 containerd[1690]: time="2024-09-04T17:29:26.076479915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-dhgqf,Uid:4c2647e3-902b-4afd-82a0-7d9247354ab6,Namespace:kube-system,Attempt:1,} returns sandbox id \"229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336\"" Sep 4 17:29:26.079330 containerd[1690]: time="2024-09-04T17:29:26.079289837Z" level=info msg="CreateContainer within sandbox \"229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:29:26.120225 containerd[1690]: time="2024-09-04T17:29:26.120188457Z" level=info msg="CreateContainer within sandbox \"229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e3e7e8e4693c9edcd0a0ce2d150553c30aa0faf48edac3bbe308dfb1bd5be189\"" Sep 4 17:29:26.121135 containerd[1690]: time="2024-09-04T17:29:26.120683061Z" level=info msg="StartContainer for \"e3e7e8e4693c9edcd0a0ce2d150553c30aa0faf48edac3bbe308dfb1bd5be189\"" Sep 4 17:29:26.147028 systemd[1]: Started cri-containerd-e3e7e8e4693c9edcd0a0ce2d150553c30aa0faf48edac3bbe308dfb1bd5be189.scope - libcontainer container e3e7e8e4693c9edcd0a0ce2d150553c30aa0faf48edac3bbe308dfb1bd5be189. 
Sep 4 17:29:26.174834 containerd[1690]: time="2024-09-04T17:29:26.173418473Z" level=info msg="StartContainer for \"e3e7e8e4693c9edcd0a0ce2d150553c30aa0faf48edac3bbe308dfb1bd5be189\" returns successfully" Sep 4 17:29:26.441242 kubelet[3182]: I0904 17:29:26.440917 3182 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:29:26.616218 containerd[1690]: time="2024-09-04T17:29:26.615495632Z" level=info msg="StopPodSandbox for \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\"" Sep 4 17:29:26.689707 containerd[1690]: 2024-09-04 17:29:26.660 [INFO][4855] k8s.go 608: Cleaning up netns ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Sep 4 17:29:26.689707 containerd[1690]: 2024-09-04 17:29:26.660 [INFO][4855] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" iface="eth0" netns="/var/run/netns/cni-33b51e57-2680-32b0-20fb-0ed4f778d383" Sep 4 17:29:26.689707 containerd[1690]: 2024-09-04 17:29:26.660 [INFO][4855] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" iface="eth0" netns="/var/run/netns/cni-33b51e57-2680-32b0-20fb-0ed4f778d383" Sep 4 17:29:26.689707 containerd[1690]: 2024-09-04 17:29:26.660 [INFO][4855] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" iface="eth0" netns="/var/run/netns/cni-33b51e57-2680-32b0-20fb-0ed4f778d383" Sep 4 17:29:26.689707 containerd[1690]: 2024-09-04 17:29:26.661 [INFO][4855] k8s.go 615: Releasing IP address(es) ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Sep 4 17:29:26.689707 containerd[1690]: 2024-09-04 17:29:26.661 [INFO][4855] utils.go 188: Calico CNI releasing IP address ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Sep 4 17:29:26.689707 containerd[1690]: 2024-09-04 17:29:26.680 [INFO][4861] ipam_plugin.go 417: Releasing address using handleID ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" HandleID="k8s-pod-network.90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Workload="ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0" Sep 4 17:29:26.689707 containerd[1690]: 2024-09-04 17:29:26.680 [INFO][4861] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:29:26.689707 containerd[1690]: 2024-09-04 17:29:26.680 [INFO][4861] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:29:26.689707 containerd[1690]: 2024-09-04 17:29:26.685 [WARNING][4861] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" HandleID="k8s-pod-network.90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Workload="ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0" Sep 4 17:29:26.689707 containerd[1690]: 2024-09-04 17:29:26.685 [INFO][4861] ipam_plugin.go 445: Releasing address using workloadID ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" HandleID="k8s-pod-network.90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Workload="ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0" Sep 4 17:29:26.689707 containerd[1690]: 2024-09-04 17:29:26.686 [INFO][4861] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:29:26.689707 containerd[1690]: 2024-09-04 17:29:26.688 [INFO][4855] k8s.go 621: Teardown processing complete. ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Sep 4 17:29:26.690684 containerd[1690]: time="2024-09-04T17:29:26.689877214Z" level=info msg="TearDown network for sandbox \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\" successfully" Sep 4 17:29:26.690684 containerd[1690]: time="2024-09-04T17:29:26.689915914Z" level=info msg="StopPodSandbox for \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\" returns successfully" Sep 4 17:29:26.691039 containerd[1690]: time="2024-09-04T17:29:26.690994123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b62lq,Uid:ae313f17-0269-49cd-93a7-cf8ff23b72b7,Namespace:calico-system,Attempt:1,}" Sep 4 17:29:26.723565 systemd[1]: run-netns-cni\x2d33b51e57\x2d2680\x2d32b0\x2d20fb\x2d0ed4f778d383.mount: Deactivated successfully. Sep 4 17:29:26.815947 kubelet[3182]: I0904 17:29:26.813910 3182 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-dhgqf" podStartSLOduration=35.813866984 podCreationTimestamp="2024-09-04 17:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:29:26.811547566 +0000 UTC m=+49.285790866" watchObservedRunningTime="2024-09-04 17:29:26.813866984 +0000 UTC m=+49.288110284" Sep 4 17:29:26.872643 systemd-networkd[1554]: calid74373811aa: Link UP Sep 4 17:29:26.873986 systemd-networkd[1554]: calid74373811aa: Gained carrier Sep 4 17:29:26.889135 containerd[1690]: 2024-09-04 17:29:26.772 [INFO][4867] utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 17:29:26.889135 containerd[1690]: 2024-09-04 17:29:26.780 [INFO][4867] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0 csi-node-driver- calico-system ae313f17-0269-49cd-93a7-cf8ff23b72b7 725 0 2024-09-04 17:28:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975.2.1-a-1f7e34d344 csi-node-driver-b62lq eth0 default [] [] [kns.calico-system ksa.calico-system.default] calid74373811aa [] []}} ContainerID="46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60" Namespace="calico-system" Pod="csi-node-driver-b62lq" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-" Sep 4 17:29:26.889135 containerd[1690]: 2024-09-04 17:29:26.780 [INFO][4867] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60" Namespace="calico-system" Pod="csi-node-driver-b62lq" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0" Sep 4 17:29:26.889135 containerd[1690]: 2024-09-04 17:29:26.814 [INFO][4878] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60" HandleID="k8s-pod-network.46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60" Workload="ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0" Sep 4 17:29:26.889135 containerd[1690]: 2024-09-04 17:29:26.827 
[INFO][4878] ipam_plugin.go 270: Auto assigning IP ContainerID="46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60" HandleID="k8s-pod-network.46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60" Workload="ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee050), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.2.1-a-1f7e34d344", "pod":"csi-node-driver-b62lq", "timestamp":"2024-09-04 17:29:26.814961893 +0000 UTC"}, Hostname:"ci-3975.2.1-a-1f7e34d344", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:29:26.889135 containerd[1690]: 2024-09-04 17:29:26.827 [INFO][4878] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:29:26.889135 containerd[1690]: 2024-09-04 17:29:26.828 [INFO][4878] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:29:26.889135 containerd[1690]: 2024-09-04 17:29:26.828 [INFO][4878] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-1f7e34d344' Sep 4 17:29:26.889135 containerd[1690]: 2024-09-04 17:29:26.833 [INFO][4878] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:26.889135 containerd[1690]: 2024-09-04 17:29:26.853 [INFO][4878] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:26.889135 containerd[1690]: 2024-09-04 17:29:26.856 [INFO][4878] ipam.go 489: Trying affinity for 192.168.76.64/26 host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:26.889135 containerd[1690]: 2024-09-04 17:29:26.859 [INFO][4878] ipam.go 155: Attempting to load block cidr=192.168.76.64/26 host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:26.889135 containerd[1690]: 2024-09-04 17:29:26.860 [INFO][4878] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.64/26 host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:26.889135 containerd[1690]: 2024-09-04 17:29:26.861 [INFO][4878] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.64/26 handle="k8s-pod-network.46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:26.889135 containerd[1690]: 2024-09-04 17:29:26.862 [INFO][4878] ipam.go 1685: Creating new handle: k8s-pod-network.46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60 Sep 4 17:29:26.889135 containerd[1690]: 2024-09-04 17:29:26.864 [INFO][4878] ipam.go 1203: Writing block in order to claim IPs block=192.168.76.64/26 handle="k8s-pod-network.46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:26.889135 containerd[1690]: 2024-09-04 17:29:26.869 [INFO][4878] ipam.go 1216: Successfully claimed IPs: [192.168.76.68/26] block=192.168.76.64/26 handle="k8s-pod-network.46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:26.889135 containerd[1690]: 2024-09-04 17:29:26.869 [INFO][4878] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.68/26] handle="k8s-pod-network.46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:26.889135 containerd[1690]: 2024-09-04 17:29:26.869 [INFO][4878] ipam_plugin.go 379: Released host-wide IPAM 
lock. Sep 4 17:29:26.889135 containerd[1690]: 2024-09-04 17:29:26.869 [INFO][4878] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.76.68/26] IPv6=[] ContainerID="46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60" HandleID="k8s-pod-network.46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60" Workload="ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0" Sep 4 17:29:26.890286 containerd[1690]: 2024-09-04 17:29:26.870 [INFO][4867] k8s.go 386: Populated endpoint ContainerID="46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60" Namespace="calico-system" Pod="csi-node-driver-b62lq" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae313f17-0269-49cd-93a7-cf8ff23b72b7", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 28, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-1f7e34d344", ContainerID:"", Pod:"csi-node-driver-b62lq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.76.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid74373811aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:26.890286 containerd[1690]: 2024-09-04 17:29:26.870 [INFO][4867] k8s.go 387: Calico CNI using IPs: [192.168.76.68/32] ContainerID="46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60" Namespace="calico-system" Pod="csi-node-driver-b62lq" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0" Sep 4 17:29:26.890286 containerd[1690]: 2024-09-04 17:29:26.870 [INFO][4867] dataplane_linux.go 68: Setting the host side veth name to calid74373811aa ContainerID="46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60" Namespace="calico-system" Pod="csi-node-driver-b62lq" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0" Sep 4 17:29:26.890286 containerd[1690]: 2024-09-04 17:29:26.873 [INFO][4867] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60" Namespace="calico-system" Pod="csi-node-driver-b62lq" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0" Sep 4 17:29:26.890286 containerd[1690]: 2024-09-04 17:29:26.873 [INFO][4867] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60" Namespace="calico-system" Pod="csi-node-driver-b62lq" 
WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae313f17-0269-49cd-93a7-cf8ff23b72b7", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 28, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-1f7e34d344", ContainerID:"46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60", Pod:"csi-node-driver-b62lq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.76.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid74373811aa", MAC:"82:cf:28:9f:78:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:26.890286 containerd[1690]: 2024-09-04 17:29:26.883 [INFO][4867] k8s.go 500: Wrote updated endpoint to datastore ContainerID="46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60" Namespace="calico-system" Pod="csi-node-driver-b62lq" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0" Sep 4 17:29:26.916327 containerd[1690]: time="2024-09-04T17:29:26.916233085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:29:26.916327 containerd[1690]: time="2024-09-04T17:29:26.916293185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:26.916629 containerd[1690]: time="2024-09-04T17:29:26.916312686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:29:26.916629 containerd[1690]: time="2024-09-04T17:29:26.916493587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:26.947290 systemd[1]: Started cri-containerd-46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60.scope - libcontainer container 46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60. 
Sep 4 17:29:26.968506 containerd[1690]: time="2024-09-04T17:29:26.968450693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b62lq,Uid:ae313f17-0269-49cd-93a7-cf8ff23b72b7,Namespace:calico-system,Attempt:1,} returns sandbox id \"46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60\"" Sep 4 17:29:27.187203 systemd-networkd[1554]: cali9fbc1f7cece: Gained IPv6LL Sep 4 17:29:27.532588 kernel: bpftool[4960]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:29:27.699989 systemd-networkd[1554]: cali55878149bae: Gained IPv6LL Sep 4 17:29:27.726397 systemd[1]: run-containerd-runc-k8s.io-46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60-runc.8xW7xo.mount: Deactivated successfully. Sep 4 17:29:28.148942 systemd-networkd[1554]: calid74373811aa: Gained IPv6LL Sep 4 17:29:28.288987 systemd-networkd[1554]: vxlan.calico: Link UP Sep 4 17:29:28.288997 systemd-networkd[1554]: vxlan.calico: Gained carrier Sep 4 17:29:29.284349 containerd[1690]: time="2024-09-04T17:29:29.284298874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:29.288076 containerd[1690]: time="2024-09-04T17:29:29.288027405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Sep 4 17:29:29.294228 containerd[1690]: time="2024-09-04T17:29:29.294047555Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:29.301345 containerd[1690]: time="2024-09-04T17:29:29.301198415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:29.302264 containerd[1690]: time="2024-09-04T17:29:29.302123422Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 3.236712894s" Sep 4 17:29:29.302264 containerd[1690]: time="2024-09-04T17:29:29.302164123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 4 17:29:29.303607 containerd[1690]: time="2024-09-04T17:29:29.303154431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:29:29.323956 containerd[1690]: time="2024-09-04T17:29:29.323838502Z" level=info msg="CreateContainer within sandbox \"e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:29:29.362985 systemd-networkd[1554]: vxlan.calico: Gained IPv6LL Sep 4 17:29:29.368804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3363645758.mount: Deactivated successfully. 
Sep 4 17:29:29.381784 containerd[1690]: time="2024-09-04T17:29:29.381668582Z" level=info msg="CreateContainer within sandbox \"e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"93056e586331a15324df9cd1432d8ae406eb9ccfe55447fcf5ab1e874f02cd57\"" Sep 4 17:29:29.383384 containerd[1690]: time="2024-09-04T17:29:29.382378388Z" level=info msg="StartContainer for \"93056e586331a15324df9cd1432d8ae406eb9ccfe55447fcf5ab1e874f02cd57\"" Sep 4 17:29:29.414585 systemd[1]: Started cri-containerd-93056e586331a15324df9cd1432d8ae406eb9ccfe55447fcf5ab1e874f02cd57.scope - libcontainer container 93056e586331a15324df9cd1432d8ae406eb9ccfe55447fcf5ab1e874f02cd57. Sep 4 17:29:29.457497 containerd[1690]: time="2024-09-04T17:29:29.457454111Z" level=info msg="StartContainer for \"93056e586331a15324df9cd1432d8ae406eb9ccfe55447fcf5ab1e874f02cd57\" returns successfully" Sep 4 17:29:29.836169 kubelet[3182]: I0904 17:29:29.836013 3182 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c4b665f85-mmtlg" podStartSLOduration=29.596428633 podCreationTimestamp="2024-09-04 17:28:57 +0000 UTC" firstStartedPulling="2024-09-04 17:29:26.063221711 +0000 UTC m=+48.537465011" lastFinishedPulling="2024-09-04 17:29:29.302727027 +0000 UTC m=+51.776970327" observedRunningTime="2024-09-04 17:29:29.833373328 +0000 UTC m=+52.307616728" watchObservedRunningTime="2024-09-04 17:29:29.835933949 +0000 UTC m=+52.310177349" Sep 4 17:29:30.605827 containerd[1690]: time="2024-09-04T17:29:30.605738091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:30.613013 containerd[1690]: time="2024-09-04T17:29:30.612945954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 4 17:29:30.618199 containerd[1690]: time="2024-09-04T17:29:30.618144100Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:30.623031 containerd[1690]: time="2024-09-04T17:29:30.622981642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:30.624196 containerd[1690]: time="2024-09-04T17:29:30.623629348Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.320434717s" Sep 4 17:29:30.624196 containerd[1690]: time="2024-09-04T17:29:30.623668548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 4 17:29:30.625681 containerd[1690]: time="2024-09-04T17:29:30.625646065Z" level=info msg="CreateContainer within sandbox \"46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:29:30.668225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2530887766.mount: Deactivated successfully. 
Sep 4 17:29:30.671505 containerd[1690]: time="2024-09-04T17:29:30.671446767Z" level=info msg="CreateContainer within sandbox \"46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"de076218f8d353f77c05bd0eb6ae4ef10f640f9ead88408288952c94c1c37866\"" Sep 4 17:29:30.672235 containerd[1690]: time="2024-09-04T17:29:30.672203974Z" level=info msg="StartContainer for \"de076218f8d353f77c05bd0eb6ae4ef10f640f9ead88408288952c94c1c37866\"" Sep 4 17:29:30.710015 systemd[1]: Started cri-containerd-de076218f8d353f77c05bd0eb6ae4ef10f640f9ead88408288952c94c1c37866.scope - libcontainer container de076218f8d353f77c05bd0eb6ae4ef10f640f9ead88408288952c94c1c37866. Sep 4 17:29:30.752596 containerd[1690]: time="2024-09-04T17:29:30.752443077Z" level=info msg="StartContainer for \"de076218f8d353f77c05bd0eb6ae4ef10f640f9ead88408288952c94c1c37866\" returns successfully" Sep 4 17:29:30.754216 containerd[1690]: time="2024-09-04T17:29:30.754193092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:29:32.235774 containerd[1690]: time="2024-09-04T17:29:32.235724177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:32.239133 containerd[1690]: time="2024-09-04T17:29:32.239052406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 4 17:29:32.245511 containerd[1690]: time="2024-09-04T17:29:32.245331461Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:32.251299 containerd[1690]: time="2024-09-04T17:29:32.251228513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:32.252496 containerd[1690]: time="2024-09-04T17:29:32.251951419Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.497576525s" Sep 4 17:29:32.252496 containerd[1690]: time="2024-09-04T17:29:32.251988719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 4 17:29:32.254115 containerd[1690]: time="2024-09-04T17:29:32.254080638Z" level=info msg="CreateContainer within sandbox \"46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:29:32.317644 containerd[1690]: time="2024-09-04T17:29:32.317597194Z" level=info msg="CreateContainer within sandbox \"46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"557df62d92b1035cf0ef0d9c03121e5b3aa2cb011591905450115b0b654cf9a0\"" Sep 4 17:29:32.321899 containerd[1690]: time="2024-09-04T17:29:32.319744113Z" 
level=info msg="StartContainer for \"557df62d92b1035cf0ef0d9c03121e5b3aa2cb011591905450115b0b654cf9a0\"" Sep 4 17:29:32.373602 systemd[1]: run-containerd-runc-k8s.io-557df62d92b1035cf0ef0d9c03121e5b3aa2cb011591905450115b0b654cf9a0-runc.GfexVo.mount: Deactivated successfully. Sep 4 17:29:32.384484 systemd[1]: Started cri-containerd-557df62d92b1035cf0ef0d9c03121e5b3aa2cb011591905450115b0b654cf9a0.scope - libcontainer container 557df62d92b1035cf0ef0d9c03121e5b3aa2cb011591905450115b0b654cf9a0. Sep 4 17:29:32.416313 containerd[1690]: time="2024-09-04T17:29:32.416273959Z" level=info msg="StartContainer for \"557df62d92b1035cf0ef0d9c03121e5b3aa2cb011591905450115b0b654cf9a0\" returns successfully" Sep 4 17:29:32.721832 kubelet[3182]: I0904 17:29:32.721602 3182 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:29:32.721832 kubelet[3182]: I0904 17:29:32.721640 3182 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:29:32.842553 kubelet[3182]: I0904 17:29:32.842480 3182 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-b62lq" podStartSLOduration=30.559681374 podCreationTimestamp="2024-09-04 17:28:57 +0000 UTC" firstStartedPulling="2024-09-04 17:29:26.969480101 +0000 UTC m=+49.443723401" lastFinishedPulling="2024-09-04 17:29:32.252217921 +0000 UTC m=+54.726461321" observedRunningTime="2024-09-04 17:29:32.842289293 +0000 UTC m=+55.316532593" watchObservedRunningTime="2024-09-04 17:29:32.842419294 +0000 UTC m=+55.316662694" Sep 4 17:29:37.613230 containerd[1690]: time="2024-09-04T17:29:37.612837575Z" level=info msg="StopPodSandbox for \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\"" Sep 4 17:29:37.671739 containerd[1690]: 2024-09-04 17:29:37.645 [WARNING][5200] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae313f17-0269-49cd-93a7-cf8ff23b72b7", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 28, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-1f7e34d344", ContainerID:"46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60", Pod:"csi-node-driver-b62lq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.76.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid74373811aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:37.671739 containerd[1690]: 2024-09-04 17:29:37.645 [INFO][5200] k8s.go 608: Cleaning up netns ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Sep 4 17:29:37.671739 containerd[1690]: 2024-09-04 17:29:37.645 [INFO][5200] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" iface="eth0" netns="" Sep 4 17:29:37.671739 containerd[1690]: 2024-09-04 17:29:37.645 [INFO][5200] k8s.go 615: Releasing IP address(es) ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Sep 4 17:29:37.671739 containerd[1690]: 2024-09-04 17:29:37.645 [INFO][5200] utils.go 188: Calico CNI releasing IP address ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Sep 4 17:29:37.671739 containerd[1690]: 2024-09-04 17:29:37.663 [INFO][5207] ipam_plugin.go 417: Releasing address using handleID ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" HandleID="k8s-pod-network.90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Workload="ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0" Sep 4 17:29:37.671739 containerd[1690]: 2024-09-04 17:29:37.663 [INFO][5207] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:29:37.671739 containerd[1690]: 2024-09-04 17:29:37.663 [INFO][5207] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:29:37.671739 containerd[1690]: 2024-09-04 17:29:37.668 [WARNING][5207] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" HandleID="k8s-pod-network.90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Workload="ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0" Sep 4 17:29:37.671739 containerd[1690]: 2024-09-04 17:29:37.668 [INFO][5207] ipam_plugin.go 445: Releasing address using workloadID ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" HandleID="k8s-pod-network.90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Workload="ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0" Sep 4 17:29:37.671739 containerd[1690]: 2024-09-04 17:29:37.670 [INFO][5207] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:29:37.671739 containerd[1690]: 2024-09-04 17:29:37.670 [INFO][5200] k8s.go 621: Teardown processing complete. ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Sep 4 17:29:37.672375 containerd[1690]: time="2024-09-04T17:29:37.671786367Z" level=info msg="TearDown network for sandbox \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\" successfully" Sep 4 17:29:37.672375 containerd[1690]: time="2024-09-04T17:29:37.671836668Z" level=info msg="StopPodSandbox for \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\" returns successfully" Sep 4 17:29:37.672591 containerd[1690]: time="2024-09-04T17:29:37.672562974Z" level=info msg="RemovePodSandbox for \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\"" Sep 4 17:29:37.672748 containerd[1690]: time="2024-09-04T17:29:37.672603374Z" level=info msg="Forcibly stopping sandbox \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\"" Sep 4 17:29:37.731528 containerd[1690]: 2024-09-04 17:29:37.705 [WARNING][5225] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae313f17-0269-49cd-93a7-cf8ff23b72b7", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 28, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-1f7e34d344", ContainerID:"46d78aaf53f2b50dbfd94752bc469623c23dcd792d0d08310a376592e7cc8e60", Pod:"csi-node-driver-b62lq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.76.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid74373811aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:37.731528 containerd[1690]: 2024-09-04 17:29:37.705 [INFO][5225] k8s.go 608: Cleaning up netns ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Sep 4 17:29:37.731528 containerd[1690]: 2024-09-04 17:29:37.705 [INFO][5225] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" iface="eth0" netns="" Sep 4 17:29:37.731528 containerd[1690]: 2024-09-04 17:29:37.705 [INFO][5225] k8s.go 615: Releasing IP address(es) ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Sep 4 17:29:37.731528 containerd[1690]: 2024-09-04 17:29:37.705 [INFO][5225] utils.go 188: Calico CNI releasing IP address ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Sep 4 17:29:37.731528 containerd[1690]: 2024-09-04 17:29:37.723 [INFO][5231] ipam_plugin.go 417: Releasing address using handleID ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" HandleID="k8s-pod-network.90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Workload="ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0" Sep 4 17:29:37.731528 containerd[1690]: 2024-09-04 17:29:37.723 [INFO][5231] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:29:37.731528 containerd[1690]: 2024-09-04 17:29:37.723 [INFO][5231] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:29:37.731528 containerd[1690]: 2024-09-04 17:29:37.727 [WARNING][5231] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" HandleID="k8s-pod-network.90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Workload="ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0" Sep 4 17:29:37.731528 containerd[1690]: 2024-09-04 17:29:37.727 [INFO][5231] ipam_plugin.go 445: Releasing address using workloadID ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" HandleID="k8s-pod-network.90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Workload="ci--3975.2.1--a--1f7e34d344-k8s-csi--node--driver--b62lq-eth0" Sep 4 17:29:37.731528 containerd[1690]: 2024-09-04 17:29:37.728 [INFO][5231] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:29:37.731528 containerd[1690]: 2024-09-04 17:29:37.730 [INFO][5225] k8s.go 621: Teardown processing complete. ContainerID="90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7" Sep 4 17:29:37.731528 containerd[1690]: time="2024-09-04T17:29:37.731400365Z" level=info msg="TearDown network for sandbox \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\" successfully" Sep 4 17:29:37.742336 containerd[1690]: time="2024-09-04T17:29:37.742282456Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:29:37.742448 containerd[1690]: time="2024-09-04T17:29:37.742350656Z" level=info msg="RemovePodSandbox \"90a599bdaaeb8d8886989ddc72cff23a183a1bcab21c20cf1345f08a7d54f0e7\" returns successfully" Sep 4 17:29:37.742788 containerd[1690]: time="2024-09-04T17:29:37.742698759Z" level=info msg="StopPodSandbox for \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\"" Sep 4 17:29:37.796694 containerd[1690]: 2024-09-04 17:29:37.770 [WARNING][5249] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"40c76adf-c41f-4f0d-a3b3-98fb776074be", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 28, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-1f7e34d344", ContainerID:"405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1", Pod:"coredns-5dd5756b68-fnc7v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9adc739ea14", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:37.796694 containerd[1690]: 2024-09-04 17:29:37.770 [INFO][5249] k8s.go 608: Cleaning up netns ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Sep 4 17:29:37.796694 containerd[1690]: 2024-09-04 17:29:37.771 [INFO][5249] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" iface="eth0" netns="" Sep 4 17:29:37.796694 containerd[1690]: 2024-09-04 17:29:37.771 [INFO][5249] k8s.go 615: Releasing IP address(es) ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Sep 4 17:29:37.796694 containerd[1690]: 2024-09-04 17:29:37.771 [INFO][5249] utils.go 188: Calico CNI releasing IP address ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Sep 4 17:29:37.796694 containerd[1690]: 2024-09-04 17:29:37.787 [INFO][5255] ipam_plugin.go 417: Releasing address using handleID ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" HandleID="k8s-pod-network.6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0" Sep 4 17:29:37.796694 containerd[1690]: 2024-09-04 17:29:37.787 [INFO][5255] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:29:37.796694 containerd[1690]: 2024-09-04 17:29:37.787 [INFO][5255] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:29:37.796694 containerd[1690]: 2024-09-04 17:29:37.793 [WARNING][5255] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" HandleID="k8s-pod-network.6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0" Sep 4 17:29:37.796694 containerd[1690]: 2024-09-04 17:29:37.793 [INFO][5255] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" HandleID="k8s-pod-network.6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0" Sep 4 17:29:37.796694 containerd[1690]: 2024-09-04 17:29:37.794 [INFO][5255] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:29:37.796694 containerd[1690]: 2024-09-04 17:29:37.795 [INFO][5249] k8s.go 621: Teardown processing complete. ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Sep 4 17:29:37.796694 containerd[1690]: time="2024-09-04T17:29:37.796639410Z" level=info msg="TearDown network for sandbox \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\" successfully" Sep 4 17:29:37.796694 containerd[1690]: time="2024-09-04T17:29:37.796671410Z" level=info msg="StopPodSandbox for \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\" returns successfully" Sep 4 17:29:37.797695 containerd[1690]: time="2024-09-04T17:29:37.797141214Z" level=info msg="RemovePodSandbox for \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\"" Sep 4 17:29:37.797695 containerd[1690]: time="2024-09-04T17:29:37.797171114Z" level=info msg="Forcibly stopping sandbox \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\"" Sep 4 17:29:37.861020 containerd[1690]: 2024-09-04 17:29:37.827 [WARNING][5274] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"40c76adf-c41f-4f0d-a3b3-98fb776074be", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 28, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-1f7e34d344", ContainerID:"405264857b818d527340fd26224cdd7118bae44603ba42cc972b91b5a96ec3d1", Pod:"coredns-5dd5756b68-fnc7v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9adc739ea14", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:37.861020 containerd[1690]: 2024-09-04 17:29:37.827 [INFO][5274] k8s.go 608: Cleaning up netns ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Sep 4 17:29:37.861020 containerd[1690]: 2024-09-04 17:29:37.827 [INFO][5274] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" iface="eth0" netns="" Sep 4 17:29:37.861020 containerd[1690]: 2024-09-04 17:29:37.827 [INFO][5274] k8s.go 615: Releasing IP address(es) ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Sep 4 17:29:37.861020 containerd[1690]: 2024-09-04 17:29:37.827 [INFO][5274] utils.go 188: Calico CNI releasing IP address ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Sep 4 17:29:37.861020 containerd[1690]: 2024-09-04 17:29:37.846 [INFO][5280] ipam_plugin.go 417: Releasing address using handleID ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" HandleID="k8s-pod-network.6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0" Sep 4 17:29:37.861020 containerd[1690]: 2024-09-04 17:29:37.847 [INFO][5280] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:29:37.861020 containerd[1690]: 2024-09-04 17:29:37.847 [INFO][5280] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:29:37.861020 containerd[1690]: 2024-09-04 17:29:37.856 [WARNING][5280] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" HandleID="k8s-pod-network.6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0" Sep 4 17:29:37.861020 containerd[1690]: 2024-09-04 17:29:37.856 [INFO][5280] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" HandleID="k8s-pod-network.6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--fnc7v-eth0" Sep 4 17:29:37.861020 containerd[1690]: 2024-09-04 17:29:37.858 [INFO][5280] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:29:37.861020 containerd[1690]: 2024-09-04 17:29:37.860 [INFO][5274] k8s.go 621: Teardown processing complete. ContainerID="6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f" Sep 4 17:29:37.861880 containerd[1690]: time="2024-09-04T17:29:37.861040847Z" level=info msg="TearDown network for sandbox \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\" successfully" Sep 4 17:29:37.878019 containerd[1690]: time="2024-09-04T17:29:37.877740586Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:29:37.878019 containerd[1690]: time="2024-09-04T17:29:37.877990688Z" level=info msg="RemovePodSandbox \"6704cc6c0e5d583d5c2df72b89feb37756d6cbb8a714321e0f9fda5116f1b44f\" returns successfully" Sep 4 17:29:37.878834 containerd[1690]: time="2024-09-04T17:29:37.878497392Z" level=info msg="StopPodSandbox for \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\"" Sep 4 17:29:37.935399 containerd[1690]: 2024-09-04 17:29:37.909 [WARNING][5299] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0", GenerateName:"calico-kube-controllers-c4b665f85-", Namespace:"calico-system", SelfLink:"", UID:"5c37808a-484d-4381-8069-9b46cdacb5ee", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 28, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c4b665f85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-1f7e34d344", ContainerID:"e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599", Pod:"calico-kube-controllers-c4b665f85-mmtlg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali55878149bae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:37.935399 containerd[1690]: 2024-09-04 17:29:37.909 [INFO][5299] k8s.go 608: Cleaning up netns ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Sep 4 17:29:37.935399 containerd[1690]: 2024-09-04 17:29:37.909 [INFO][5299] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" iface="eth0" netns="" Sep 4 17:29:37.935399 containerd[1690]: 2024-09-04 17:29:37.909 [INFO][5299] k8s.go 615: Releasing IP address(es) ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Sep 4 17:29:37.935399 containerd[1690]: 2024-09-04 17:29:37.909 [INFO][5299] utils.go 188: Calico CNI releasing IP address ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Sep 4 17:29:37.935399 containerd[1690]: 2024-09-04 17:29:37.927 [INFO][5305] ipam_plugin.go 417: Releasing address using handleID ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" HandleID="k8s-pod-network.a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Workload="ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0" Sep 4 17:29:37.935399 containerd[1690]: 2024-09-04 17:29:37.927 [INFO][5305] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:29:37.935399 containerd[1690]: 2024-09-04 17:29:37.927 [INFO][5305] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:29:37.935399 containerd[1690]: 2024-09-04 17:29:37.932 [WARNING][5305] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" HandleID="k8s-pod-network.a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Workload="ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0" Sep 4 17:29:37.935399 containerd[1690]: 2024-09-04 17:29:37.932 [INFO][5305] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" HandleID="k8s-pod-network.a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Workload="ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0" Sep 4 17:29:37.935399 containerd[1690]: 2024-09-04 17:29:37.933 [INFO][5305] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:29:37.935399 containerd[1690]: 2024-09-04 17:29:37.934 [INFO][5299] k8s.go 621: Teardown processing complete. ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Sep 4 17:29:37.936333 containerd[1690]: time="2024-09-04T17:29:37.935439266Z" level=info msg="TearDown network for sandbox \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\" successfully" Sep 4 17:29:37.936333 containerd[1690]: time="2024-09-04T17:29:37.935467266Z" level=info msg="StopPodSandbox for \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\" returns successfully" Sep 4 17:29:37.936333 containerd[1690]: time="2024-09-04T17:29:37.935909969Z" level=info msg="RemovePodSandbox for \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\"" Sep 4 17:29:37.936333 containerd[1690]: time="2024-09-04T17:29:37.935943570Z" level=info msg="Forcibly stopping sandbox \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\"" Sep 4 17:29:37.990577 containerd[1690]: 2024-09-04 17:29:37.964 [WARNING][5323] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0", GenerateName:"calico-kube-controllers-c4b665f85-", Namespace:"calico-system", SelfLink:"", UID:"5c37808a-484d-4381-8069-9b46cdacb5ee", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 28, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c4b665f85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-1f7e34d344", ContainerID:"e76c311e05a40da0d4be57156b1d080a81eaee542c5f7126496a0709f848d599", Pod:"calico-kube-controllers-c4b665f85-mmtlg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali55878149bae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:37.990577 containerd[1690]: 2024-09-04 17:29:37.964 [INFO][5323] k8s.go 608: Cleaning up netns ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Sep 4 17:29:37.990577 containerd[1690]: 2024-09-04 17:29:37.964 [INFO][5323] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" iface="eth0" netns="" Sep 4 17:29:37.990577 containerd[1690]: 2024-09-04 17:29:37.964 [INFO][5323] k8s.go 615: Releasing IP address(es) ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Sep 4 17:29:37.990577 containerd[1690]: 2024-09-04 17:29:37.964 [INFO][5323] utils.go 188: Calico CNI releasing IP address ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Sep 4 17:29:37.990577 containerd[1690]: 2024-09-04 17:29:37.982 [INFO][5329] ipam_plugin.go 417: Releasing address using handleID ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" HandleID="k8s-pod-network.a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Workload="ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0" Sep 4 17:29:37.990577 containerd[1690]: 2024-09-04 17:29:37.982 [INFO][5329] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:29:37.990577 containerd[1690]: 2024-09-04 17:29:37.982 [INFO][5329] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:29:37.990577 containerd[1690]: 2024-09-04 17:29:37.987 [WARNING][5329] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" HandleID="k8s-pod-network.a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Workload="ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0" Sep 4 17:29:37.990577 containerd[1690]: 2024-09-04 17:29:37.987 [INFO][5329] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" HandleID="k8s-pod-network.a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Workload="ci--3975.2.1--a--1f7e34d344-k8s-calico--kube--controllers--c4b665f85--mmtlg-eth0" Sep 4 17:29:37.990577 containerd[1690]: 2024-09-04 17:29:37.988 [INFO][5329] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:29:37.990577 containerd[1690]: 2024-09-04 17:29:37.989 [INFO][5323] k8s.go 621: Teardown processing complete. ContainerID="a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e" Sep 4 17:29:37.991418 containerd[1690]: time="2024-09-04T17:29:37.990635224Z" level=info msg="TearDown network for sandbox \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\" successfully" Sep 4 17:29:38.001092 containerd[1690]: time="2024-09-04T17:29:38.001053311Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:29:38.001197 containerd[1690]: time="2024-09-04T17:29:38.001112012Z" level=info msg="RemovePodSandbox \"a055e3a64457d3ac387da6475ea9e09df18df4f6c4da1321ac59ad1741a75a1e\" returns successfully" Sep 4 17:29:38.001647 containerd[1690]: time="2024-09-04T17:29:38.001614316Z" level=info msg="StopPodSandbox for \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\"" Sep 4 17:29:38.058256 containerd[1690]: 2024-09-04 17:29:38.030 [WARNING][5347] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"4c2647e3-902b-4afd-82a0-7d9247354ab6", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 28, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-1f7e34d344", ContainerID:"229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336", Pod:"coredns-5dd5756b68-dhgqf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9fbc1f7cece", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:38.058256 containerd[1690]: 2024-09-04 17:29:38.030 [INFO][5347] k8s.go 608: Cleaning up netns ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Sep 4 17:29:38.058256 containerd[1690]: 2024-09-04 17:29:38.030 [INFO][5347] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" iface="eth0" netns="" Sep 4 17:29:38.058256 containerd[1690]: 2024-09-04 17:29:38.030 [INFO][5347] k8s.go 615: Releasing IP address(es) ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Sep 4 17:29:38.058256 containerd[1690]: 2024-09-04 17:29:38.030 [INFO][5347] utils.go 188: Calico CNI releasing IP address ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Sep 4 17:29:38.058256 containerd[1690]: 2024-09-04 17:29:38.050 [INFO][5353] ipam_plugin.go 417: Releasing address using handleID ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" HandleID="k8s-pod-network.c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0" Sep 4 17:29:38.058256 containerd[1690]: 2024-09-04 17:29:38.050 [INFO][5353] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:29:38.058256 containerd[1690]: 2024-09-04 17:29:38.050 [INFO][5353] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:29:38.058256 containerd[1690]: 2024-09-04 17:29:38.055 [WARNING][5353] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" HandleID="k8s-pod-network.c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0" Sep 4 17:29:38.058256 containerd[1690]: 2024-09-04 17:29:38.055 [INFO][5353] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" HandleID="k8s-pod-network.c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0" Sep 4 17:29:38.058256 containerd[1690]: 2024-09-04 17:29:38.056 [INFO][5353] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:29:38.058256 containerd[1690]: 2024-09-04 17:29:38.057 [INFO][5347] k8s.go 621: Teardown processing complete. ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Sep 4 17:29:38.058932 containerd[1690]: time="2024-09-04T17:29:38.058295287Z" level=info msg="TearDown network for sandbox \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\" successfully" Sep 4 17:29:38.058932 containerd[1690]: time="2024-09-04T17:29:38.058320787Z" level=info msg="StopPodSandbox for \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\" returns successfully" Sep 4 17:29:38.058932 containerd[1690]: time="2024-09-04T17:29:38.058829391Z" level=info msg="RemovePodSandbox for \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\"" Sep 4 17:29:38.058932 containerd[1690]: time="2024-09-04T17:29:38.058879292Z" level=info msg="Forcibly stopping sandbox \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\"" Sep 4 17:29:38.114725 containerd[1690]: 2024-09-04 17:29:38.088 [WARNING][5371] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"4c2647e3-902b-4afd-82a0-7d9247354ab6", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 28, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-1f7e34d344", ContainerID:"229b0d69bc37e6bac1dce77107e8c318b529dec7b28b493edd0c35509af80336", Pod:"coredns-5dd5756b68-dhgqf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9fbc1f7cece", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:38.114725 containerd[1690]: 2024-09-04 17:29:38.088 [INFO][5371] k8s.go 608: Cleaning up netns ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Sep 4 17:29:38.114725 containerd[1690]: 2024-09-04 17:29:38.088 [INFO][5371] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" iface="eth0" netns="" Sep 4 17:29:38.114725 containerd[1690]: 2024-09-04 17:29:38.088 [INFO][5371] k8s.go 615: Releasing IP address(es) ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Sep 4 17:29:38.114725 containerd[1690]: 2024-09-04 17:29:38.088 [INFO][5371] utils.go 188: Calico CNI releasing IP address ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Sep 4 17:29:38.114725 containerd[1690]: 2024-09-04 17:29:38.106 [INFO][5377] ipam_plugin.go 417: Releasing address using handleID ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" HandleID="k8s-pod-network.c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0" Sep 4 17:29:38.114725 containerd[1690]: 2024-09-04 17:29:38.106 [INFO][5377] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:29:38.114725 containerd[1690]: 2024-09-04 17:29:38.106 [INFO][5377] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:29:38.114725 containerd[1690]: 2024-09-04 17:29:38.111 [WARNING][5377] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" HandleID="k8s-pod-network.c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0" Sep 4 17:29:38.114725 containerd[1690]: 2024-09-04 17:29:38.111 [INFO][5377] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" HandleID="k8s-pod-network.c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Workload="ci--3975.2.1--a--1f7e34d344-k8s-coredns--5dd5756b68--dhgqf-eth0" Sep 4 17:29:38.114725 containerd[1690]: 2024-09-04 17:29:38.112 [INFO][5377] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:29:38.114725 containerd[1690]: 2024-09-04 17:29:38.113 [INFO][5371] k8s.go 621: Teardown processing complete. ContainerID="c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b" Sep 4 17:29:38.115613 containerd[1690]: time="2024-09-04T17:29:38.114749156Z" level=info msg="TearDown network for sandbox \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\" successfully" Sep 4 17:29:38.125574 containerd[1690]: time="2024-09-04T17:29:38.125534446Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:29:38.125663 containerd[1690]: time="2024-09-04T17:29:38.125606146Z" level=info msg="RemovePodSandbox \"c6f7a60892dfb26b09770a37a8c34af27c549daa0529585824862aea23baa25b\" returns successfully" Sep 4 17:29:51.762454 kubelet[3182]: I0904 17:29:51.762237 3182 topology_manager.go:215] "Topology Admit Handler" podUID="79341494-9349-4491-bd94-8f5c87e42a24" podNamespace="calico-apiserver" podName="calico-apiserver-58b48b8b49-tlrhv" Sep 4 17:29:51.772983 systemd[1]: Created slice kubepods-besteffort-pod79341494_9349_4491_bd94_8f5c87e42a24.slice - libcontainer container kubepods-besteffort-pod79341494_9349_4491_bd94_8f5c87e42a24.slice. Sep 4 17:29:51.785061 kubelet[3182]: I0904 17:29:51.784510 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/79341494-9349-4491-bd94-8f5c87e42a24-calico-apiserver-certs\") pod \"calico-apiserver-58b48b8b49-tlrhv\" (UID: \"79341494-9349-4491-bd94-8f5c87e42a24\") " pod="calico-apiserver/calico-apiserver-58b48b8b49-tlrhv" Sep 4 17:29:51.785061 kubelet[3182]: I0904 17:29:51.784562 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2jtn\" (UniqueName: \"kubernetes.io/projected/79341494-9349-4491-bd94-8f5c87e42a24-kube-api-access-h2jtn\") pod \"calico-apiserver-58b48b8b49-tlrhv\" (UID: \"79341494-9349-4491-bd94-8f5c87e42a24\") " pod="calico-apiserver/calico-apiserver-58b48b8b49-tlrhv" Sep 4 17:29:51.789867 kubelet[3182]: I0904 17:29:51.788513 3182 topology_manager.go:215] "Topology Admit Handler" podUID="4076f9aa-00a7-489c-9620-e11588bda056" podNamespace="calico-apiserver" podName="calico-apiserver-58b48b8b49-bx7cq" Sep 4 17:29:51.796916 systemd[1]: Created slice kubepods-besteffort-pod4076f9aa_00a7_489c_9620_e11588bda056.slice - libcontainer container kubepods-besteffort-pod4076f9aa_00a7_489c_9620_e11588bda056.slice. 
Sep 4 17:29:51.885218 kubelet[3182]: I0904 17:29:51.885178 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4076f9aa-00a7-489c-9620-e11588bda056-calico-apiserver-certs\") pod \"calico-apiserver-58b48b8b49-bx7cq\" (UID: \"4076f9aa-00a7-489c-9620-e11588bda056\") " pod="calico-apiserver/calico-apiserver-58b48b8b49-bx7cq" Sep 4 17:29:51.885369 kubelet[3182]: I0904 17:29:51.885234 3182 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbmxt\" (UniqueName: \"kubernetes.io/projected/4076f9aa-00a7-489c-9620-e11588bda056-kube-api-access-zbmxt\") pod \"calico-apiserver-58b48b8b49-bx7cq\" (UID: \"4076f9aa-00a7-489c-9620-e11588bda056\") " pod="calico-apiserver/calico-apiserver-58b48b8b49-bx7cq" Sep 4 17:29:51.885369 kubelet[3182]: E0904 17:29:51.885355 3182 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 4 17:29:51.885466 kubelet[3182]: E0904 17:29:51.885415 3182 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79341494-9349-4491-bd94-8f5c87e42a24-calico-apiserver-certs podName:79341494-9349-4491-bd94-8f5c87e42a24 nodeName:}" failed. No retries permitted until 2024-09-04 17:29:52.385394229 +0000 UTC m=+74.859637629 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/79341494-9349-4491-bd94-8f5c87e42a24-calico-apiserver-certs") pod "calico-apiserver-58b48b8b49-tlrhv" (UID: "79341494-9349-4491-bd94-8f5c87e42a24") : secret "calico-apiserver-certs" not found Sep 4 17:29:51.986275 kubelet[3182]: E0904 17:29:51.986093 3182 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 4 17:29:51.986275 kubelet[3182]: E0904 17:29:51.986156 3182 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4076f9aa-00a7-489c-9620-e11588bda056-calico-apiserver-certs podName:4076f9aa-00a7-489c-9620-e11588bda056 nodeName:}" failed. No retries permitted until 2024-09-04 17:29:52.486138151 +0000 UTC m=+74.960381551 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/4076f9aa-00a7-489c-9620-e11588bda056-calico-apiserver-certs") pod "calico-apiserver-58b48b8b49-bx7cq" (UID: "4076f9aa-00a7-489c-9620-e11588bda056") : secret "calico-apiserver-certs" not found Sep 4 17:29:52.678996 containerd[1690]: time="2024-09-04T17:29:52.678940906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58b48b8b49-tlrhv,Uid:79341494-9349-4491-bd94-8f5c87e42a24,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:29:52.705885 containerd[1690]: time="2024-09-04T17:29:52.704115712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58b48b8b49-bx7cq,Uid:4076f9aa-00a7-489c-9620-e11588bda056,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:29:52.834692 systemd-networkd[1554]: cali5f86f228c45: Link UP Sep 4 17:29:52.836396 systemd-networkd[1554]: cali5f86f228c45: Gained carrier Sep 4 17:29:52.850409 containerd[1690]: 2024-09-04 17:29:52.747 [INFO][5462] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--tlrhv-eth0 calico-apiserver-58b48b8b49- calico-apiserver 79341494-9349-4491-bd94-8f5c87e42a24 848 0 2024-09-04 17:29:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58b48b8b49 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.2.1-a-1f7e34d344 calico-apiserver-58b48b8b49-tlrhv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5f86f228c45 [] []}} ContainerID="04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2" Namespace="calico-apiserver" Pod="calico-apiserver-58b48b8b49-tlrhv" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--tlrhv-" Sep 4 17:29:52.850409 containerd[1690]: 2024-09-04 17:29:52.748 [INFO][5462] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2" Namespace="calico-apiserver" Pod="calico-apiserver-58b48b8b49-tlrhv" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--tlrhv-eth0" Sep 4 17:29:52.850409 containerd[1690]: 2024-09-04 17:29:52.788 [INFO][5483] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2" HandleID="k8s-pod-network.04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2" Workload="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--tlrhv-eth0" Sep 4 17:29:52.850409 containerd[1690]: 2024-09-04 17:29:52.797 [INFO][5483] ipam_plugin.go 270: Auto assigning IP ContainerID="04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2" HandleID="k8s-pod-network.04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2" Workload="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--tlrhv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.2.1-a-1f7e34d344", "pod":"calico-apiserver-58b48b8b49-tlrhv", "timestamp":"2024-09-04 17:29:52.7884008 +0000 UTC"}, Hostname:"ci-3975.2.1-a-1f7e34d344", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:29:52.850409 containerd[1690]: 2024-09-04 17:29:52.797 [INFO][5483] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:29:52.850409 containerd[1690]: 2024-09-04 17:29:52.797 [INFO][5483] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:29:52.850409 containerd[1690]: 2024-09-04 17:29:52.797 [INFO][5483] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-1f7e34d344' Sep 4 17:29:52.850409 containerd[1690]: 2024-09-04 17:29:52.799 [INFO][5483] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:52.850409 containerd[1690]: 2024-09-04 17:29:52.803 [INFO][5483] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:52.850409 containerd[1690]: 2024-09-04 17:29:52.808 [INFO][5483] ipam.go 489: Trying affinity for 192.168.76.64/26 host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:52.850409 containerd[1690]: 2024-09-04 17:29:52.810 [INFO][5483] ipam.go 155: Attempting to load block cidr=192.168.76.64/26 host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:52.850409 containerd[1690]: 2024-09-04 17:29:52.813 [INFO][5483] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.64/26 host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:52.850409 containerd[1690]: 2024-09-04 17:29:52.813 [INFO][5483] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.64/26 handle="k8s-pod-network.04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:52.850409 containerd[1690]: 2024-09-04 17:29:52.815 [INFO][5483] ipam.go 1685: Creating new handle: k8s-pod-network.04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2 Sep 4 17:29:52.850409 containerd[1690]: 2024-09-04 17:29:52.819 [INFO][5483] ipam.go 1203: Writing block in order to claim IPs block=192.168.76.64/26 handle="k8s-pod-network.04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:52.850409 containerd[1690]: 2024-09-04 17:29:52.826 [INFO][5483] ipam.go 1216: Successfully claimed IPs: [192.168.76.69/26] block=192.168.76.64/26 handle="k8s-pod-network.04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:52.850409 containerd[1690]: 2024-09-04 17:29:52.826 [INFO][5483] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.69/26] handle="k8s-pod-network.04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:52.850409 containerd[1690]: 2024-09-04 17:29:52.826 [INFO][5483] ipam_plugin.go 379: Released host-wide IPAM lock. 
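The MountVolume.SetUp failures a few records back (secret "calico-apiserver-certs" not found) are requeued with a growing delay; the log shows the first retry gated by "durationBeforeRetry 500ms". The sketch below only illustrates the doubling-backoff idea under an assumed two-minute ceiling; kubelet's real policy lives in its nestedpendingoperations code and may differ in detail.

    package main

    import (
        "fmt"
        "time"
    )

    // nextDelay doubles the wait after each failed attempt, starting from the
    // 500ms durationBeforeRetry seen in the kubelet messages above and capped at
    // an assumed ceiling.
    func nextDelay(prev time.Duration) time.Duration {
        const (
            initial = 500 * time.Millisecond
            ceiling = 2 * time.Minute
        )
        if prev == 0 {
            return initial
        }
        if next := 2 * prev; next < ceiling {
            return next
        }
        return ceiling
    }

    func main() {
        var d time.Duration
        for attempt := 1; attempt <= 6; attempt++ {
            d = nextDelay(d)
            fmt.Printf("attempt %d: retry MountVolume.SetUp for calico-apiserver-certs after %v\n", attempt, d)
        }
    }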
Sep 4 17:29:52.850409 containerd[1690]: 2024-09-04 17:29:52.826 [INFO][5483] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.76.69/26] IPv6=[] ContainerID="04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2" HandleID="k8s-pod-network.04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2" Workload="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--tlrhv-eth0" Sep 4 17:29:52.851455 containerd[1690]: 2024-09-04 17:29:52.828 [INFO][5462] k8s.go 386: Populated endpoint ContainerID="04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2" Namespace="calico-apiserver" Pod="calico-apiserver-58b48b8b49-tlrhv" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--tlrhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--tlrhv-eth0", GenerateName:"calico-apiserver-58b48b8b49-", Namespace:"calico-apiserver", SelfLink:"", UID:"79341494-9349-4491-bd94-8f5c87e42a24", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58b48b8b49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-1f7e34d344", ContainerID:"", Pod:"calico-apiserver-58b48b8b49-tlrhv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f86f228c45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:52.851455 containerd[1690]: 2024-09-04 17:29:52.829 [INFO][5462] k8s.go 387: Calico CNI using IPs: [192.168.76.69/32] ContainerID="04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2" Namespace="calico-apiserver" Pod="calico-apiserver-58b48b8b49-tlrhv" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--tlrhv-eth0" Sep 4 17:29:52.851455 containerd[1690]: 2024-09-04 17:29:52.830 [INFO][5462] dataplane_linux.go 68: Setting the host side veth name to cali5f86f228c45 ContainerID="04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2" Namespace="calico-apiserver" Pod="calico-apiserver-58b48b8b49-tlrhv" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--tlrhv-eth0" Sep 4 17:29:52.851455 containerd[1690]: 2024-09-04 17:29:52.831 [INFO][5462] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2" Namespace="calico-apiserver" Pod="calico-apiserver-58b48b8b49-tlrhv" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--tlrhv-eth0" Sep 4 17:29:52.851455 containerd[1690]: 2024-09-04 17:29:52.832 [INFO][5462] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2" Namespace="calico-apiserver" Pod="calico-apiserver-58b48b8b49-tlrhv" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--tlrhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--tlrhv-eth0", GenerateName:"calico-apiserver-58b48b8b49-", Namespace:"calico-apiserver", SelfLink:"", UID:"79341494-9349-4491-bd94-8f5c87e42a24", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58b48b8b49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-1f7e34d344", ContainerID:"04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2", Pod:"calico-apiserver-58b48b8b49-tlrhv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f86f228c45", MAC:"f2:7d:b5:c4:40:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:52.851455 containerd[1690]: 2024-09-04 17:29:52.845 [INFO][5462] k8s.go 500: Wrote updated endpoint to datastore ContainerID="04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2" Namespace="calico-apiserver" Pod="calico-apiserver-58b48b8b49-tlrhv" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--tlrhv-eth0" Sep 4 17:29:52.897590 containerd[1690]: time="2024-09-04T17:29:52.897251788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:29:52.897590 containerd[1690]: time="2024-09-04T17:29:52.897313289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:52.897590 containerd[1690]: time="2024-09-04T17:29:52.897338489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:29:52.897590 containerd[1690]: time="2024-09-04T17:29:52.897358289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:52.946019 systemd[1]: Started cri-containerd-04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2.scope - libcontainer container 04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2. 
Sep 4 17:29:52.954940 systemd-networkd[1554]: cali06dc011e732: Link UP Sep 4 17:29:52.958128 systemd-networkd[1554]: cali06dc011e732: Gained carrier Sep 4 17:29:52.999144 containerd[1690]: 2024-09-04 17:29:52.790 [INFO][5477] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--bx7cq-eth0 calico-apiserver-58b48b8b49- calico-apiserver 4076f9aa-00a7-489c-9620-e11588bda056 852 0 2024-09-04 17:29:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58b48b8b49 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.2.1-a-1f7e34d344 calico-apiserver-58b48b8b49-bx7cq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali06dc011e732 [] []}} ContainerID="34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0" Namespace="calico-apiserver" Pod="calico-apiserver-58b48b8b49-bx7cq" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--bx7cq-" Sep 4 17:29:52.999144 containerd[1690]: 2024-09-04 17:29:52.790 [INFO][5477] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0" Namespace="calico-apiserver" Pod="calico-apiserver-58b48b8b49-bx7cq" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--bx7cq-eth0" Sep 4 17:29:52.999144 containerd[1690]: 2024-09-04 17:29:52.830 [INFO][5493] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0" HandleID="k8s-pod-network.34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0" Workload="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--bx7cq-eth0" Sep 4 17:29:52.999144 containerd[1690]: 2024-09-04 17:29:52.858 [INFO][5493] ipam_plugin.go 270: Auto assigning IP ContainerID="34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0" HandleID="k8s-pod-network.34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0" Workload="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--bx7cq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318f00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.2.1-a-1f7e34d344", "pod":"calico-apiserver-58b48b8b49-bx7cq", "timestamp":"2024-09-04 17:29:52.830435743 +0000 UTC"}, Hostname:"ci-3975.2.1-a-1f7e34d344", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:29:52.999144 containerd[1690]: 2024-09-04 17:29:52.859 [INFO][5493] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:29:52.999144 containerd[1690]: 2024-09-04 17:29:52.859 [INFO][5493] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:29:52.999144 containerd[1690]: 2024-09-04 17:29:52.859 [INFO][5493] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-1f7e34d344' Sep 4 17:29:52.999144 containerd[1690]: 2024-09-04 17:29:52.873 [INFO][5493] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:52.999144 containerd[1690]: 2024-09-04 17:29:52.881 [INFO][5493] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:52.999144 containerd[1690]: 2024-09-04 17:29:52.892 [INFO][5493] ipam.go 489: Trying affinity for 192.168.76.64/26 host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:52.999144 containerd[1690]: 2024-09-04 17:29:52.895 [INFO][5493] ipam.go 155: Attempting to load block cidr=192.168.76.64/26 host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:52.999144 containerd[1690]: 2024-09-04 17:29:52.899 [INFO][5493] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.76.64/26 host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:52.999144 containerd[1690]: 2024-09-04 17:29:52.899 [INFO][5493] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.76.64/26 handle="k8s-pod-network.34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:52.999144 containerd[1690]: 2024-09-04 17:29:52.907 [INFO][5493] ipam.go 1685: Creating new handle: k8s-pod-network.34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0 Sep 4 17:29:52.999144 containerd[1690]: 2024-09-04 17:29:52.917 [INFO][5493] ipam.go 1203: Writing block in order to claim IPs block=192.168.76.64/26 handle="k8s-pod-network.34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:52.999144 containerd[1690]: 2024-09-04 17:29:52.933 [INFO][5493] ipam.go 1216: Successfully claimed IPs: [192.168.76.70/26] block=192.168.76.64/26 handle="k8s-pod-network.34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:52.999144 containerd[1690]: 2024-09-04 17:29:52.933 [INFO][5493] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.76.70/26] handle="k8s-pod-network.34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0" host="ci-3975.2.1-a-1f7e34d344" Sep 4 17:29:52.999144 containerd[1690]: 2024-09-04 17:29:52.934 [INFO][5493] ipam_plugin.go 379: Released host-wide IPAM lock. 
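Requests [5483] and [5493] above both walk the node's affine block 192.168.76.64/26 under the host-wide IPAM lock and come away with consecutive addresses, .69 and then .70. The toy allocator below sketches that "claim the next free address in the block" step; it is not Calico's real IPAM, the mutex merely plays the role of the host-wide lock, and marking .64 as already taken is an assumption made only so the walk reproduces the .69/.70 result seen in the log (the other marked entries are the pod addresses listed earlier in this section).

    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    // blockAllocator is a toy stand-in for a per-host /26 affinity block. The
    // real allocator persists blocks and handles in the datastore.
    type blockAllocator struct {
        mu   sync.Mutex
        cidr netip.Prefix
        used map[netip.Addr]bool
    }

    // assign claims the lowest free address in the block, or reports failure if
    // the block is full.
    func (b *blockAllocator) assign() (netip.Addr, bool) {
        b.mu.Lock()
        defer b.mu.Unlock()
        for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
            if !b.used[a] {
                b.used[a] = true
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := &blockAllocator{
            cidr: netip.MustParsePrefix("192.168.76.64/26"),
            used: map[netip.Addr]bool{
                netip.MustParseAddr("192.168.76.64"): true, // assumed already taken, only so the result matches the log
                netip.MustParseAddr("192.168.76.65"): true, // coredns-5dd5756b68-fnc7v
                netip.MustParseAddr("192.168.76.66"): true, // calico-kube-controllers-c4b665f85-mmtlg
                netip.MustParseAddr("192.168.76.67"): true, // coredns-5dd5756b68-dhgqf
                netip.MustParseAddr("192.168.76.68"): true, // csi-node-driver-b62lq
            },
        }
        for i := 0; i < 2; i++ {
            if ip, ok := block.assign(); ok {
                fmt.Println("claimed", ip) // prints 192.168.76.69, then 192.168.76.70
            }
        }
    }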
Sep 4 17:29:52.999144 containerd[1690]: 2024-09-04 17:29:52.934 [INFO][5493] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.76.70/26] IPv6=[] ContainerID="34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0" HandleID="k8s-pod-network.34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0" Workload="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--bx7cq-eth0" Sep 4 17:29:53.001265 containerd[1690]: 2024-09-04 17:29:52.937 [INFO][5477] k8s.go 386: Populated endpoint ContainerID="34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0" Namespace="calico-apiserver" Pod="calico-apiserver-58b48b8b49-bx7cq" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--bx7cq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--bx7cq-eth0", GenerateName:"calico-apiserver-58b48b8b49-", Namespace:"calico-apiserver", SelfLink:"", UID:"4076f9aa-00a7-489c-9620-e11588bda056", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58b48b8b49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-1f7e34d344", ContainerID:"", Pod:"calico-apiserver-58b48b8b49-bx7cq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06dc011e732", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:53.001265 containerd[1690]: 2024-09-04 17:29:52.937 [INFO][5477] k8s.go 387: Calico CNI using IPs: [192.168.76.70/32] ContainerID="34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0" Namespace="calico-apiserver" Pod="calico-apiserver-58b48b8b49-bx7cq" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--bx7cq-eth0" Sep 4 17:29:53.001265 containerd[1690]: 2024-09-04 17:29:52.937 [INFO][5477] dataplane_linux.go 68: Setting the host side veth name to cali06dc011e732 ContainerID="34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0" Namespace="calico-apiserver" Pod="calico-apiserver-58b48b8b49-bx7cq" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--bx7cq-eth0" Sep 4 17:29:53.001265 containerd[1690]: 2024-09-04 17:29:52.960 [INFO][5477] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0" Namespace="calico-apiserver" Pod="calico-apiserver-58b48b8b49-bx7cq" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--bx7cq-eth0" Sep 4 17:29:53.001265 containerd[1690]: 2024-09-04 17:29:52.964 [INFO][5477] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0" Namespace="calico-apiserver" Pod="calico-apiserver-58b48b8b49-bx7cq" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--bx7cq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--bx7cq-eth0", GenerateName:"calico-apiserver-58b48b8b49-", Namespace:"calico-apiserver", SelfLink:"", UID:"4076f9aa-00a7-489c-9620-e11588bda056", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58b48b8b49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-1f7e34d344", ContainerID:"34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0", Pod:"calico-apiserver-58b48b8b49-bx7cq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06dc011e732", MAC:"aa:e3:a5:27:35:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:53.001265 containerd[1690]: 2024-09-04 17:29:52.994 [INFO][5477] k8s.go 500: Wrote updated endpoint to datastore ContainerID="34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0" Namespace="calico-apiserver" Pod="calico-apiserver-58b48b8b49-bx7cq" WorkloadEndpoint="ci--3975.2.1--a--1f7e34d344-k8s-calico--apiserver--58b48b8b49--bx7cq-eth0" Sep 4 17:29:53.033716 containerd[1690]: time="2024-09-04T17:29:53.033662402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58b48b8b49-tlrhv,Uid:79341494-9349-4491-bd94-8f5c87e42a24,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2\"" Sep 4 17:29:53.038515 containerd[1690]: time="2024-09-04T17:29:53.038480641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:29:53.043810 containerd[1690]: time="2024-09-04T17:29:53.043734384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:29:53.044059 containerd[1690]: time="2024-09-04T17:29:53.043942886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:53.044059 containerd[1690]: time="2024-09-04T17:29:53.044034386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:29:53.044189 containerd[1690]: time="2024-09-04T17:29:53.044053487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:53.080273 systemd[1]: Started cri-containerd-34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0.scope - libcontainer container 34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0. Sep 4 17:29:53.118769 containerd[1690]: time="2024-09-04T17:29:53.118728796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58b48b8b49-bx7cq,Uid:4076f9aa-00a7-489c-9620-e11588bda056,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0\"" Sep 4 17:29:54.579122 systemd-networkd[1554]: cali5f86f228c45: Gained IPv6LL Sep 4 17:29:54.706992 systemd-networkd[1554]: cali06dc011e732: Gained IPv6LL Sep 4 17:29:55.953888 containerd[1690]: time="2024-09-04T17:29:55.953731131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:55.956242 containerd[1690]: time="2024-09-04T17:29:55.956097050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Sep 4 17:29:55.960785 containerd[1690]: time="2024-09-04T17:29:55.960636388Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:55.966212 containerd[1690]: time="2024-09-04T17:29:55.966148834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:55.967392 containerd[1690]: time="2024-09-04T17:29:55.966803239Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.928015996s" Sep 4 17:29:55.967392 containerd[1690]: time="2024-09-04T17:29:55.966840439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 17:29:55.968078 containerd[1690]: time="2024-09-04T17:29:55.968047150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:29:55.969126 containerd[1690]: time="2024-09-04T17:29:55.969101358Z" level=info msg="CreateContainer within sandbox \"04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:29:56.011246 containerd[1690]: time="2024-09-04T17:29:56.011217208Z" level=info msg="CreateContainer within sandbox \"04a0bda16512c5c66fbde040dd357d9162bddbd44b8fe8636b6f5a5f74a1c0b2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"edf75e64eb868ebeba82b07afdd82ab1e723f602d100c86f500024ef21a58ad5\"" Sep 4 17:29:56.012008 containerd[1690]: time="2024-09-04T17:29:56.011982414Z" level=info msg="StartContainer for \"edf75e64eb868ebeba82b07afdd82ab1e723f602d100c86f500024ef21a58ad5\"" Sep 4 17:29:56.047020 systemd[1]: Started cri-containerd-edf75e64eb868ebeba82b07afdd82ab1e723f602d100c86f500024ef21a58ad5.scope - libcontainer container 
edf75e64eb868ebeba82b07afdd82ab1e723f602d100c86f500024ef21a58ad5. Sep 4 17:29:56.088963 containerd[1690]: time="2024-09-04T17:29:56.088877253Z" level=info msg="StartContainer for \"edf75e64eb868ebeba82b07afdd82ab1e723f602d100c86f500024ef21a58ad5\" returns successfully" Sep 4 17:29:56.304883 containerd[1690]: time="2024-09-04T17:29:56.303606836Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:56.307015 containerd[1690]: time="2024-09-04T17:29:56.306970364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Sep 4 17:29:56.312908 containerd[1690]: time="2024-09-04T17:29:56.312785212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 344.695962ms" Sep 4 17:29:56.313055 containerd[1690]: time="2024-09-04T17:29:56.312917513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 17:29:56.318865 containerd[1690]: time="2024-09-04T17:29:56.318826162Z" level=info msg="CreateContainer within sandbox \"34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:29:56.359176 containerd[1690]: time="2024-09-04T17:29:56.359069897Z" level=info msg="CreateContainer within sandbox \"34f205fde939237cb711b8f32942df1218d382fce99ac6b9bcc82ff6911bfbc0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d438e42c359b353ab7f83181166347a026a9c4b68e053ead5d58f6a905dfccd9\"" Sep 4 17:29:56.359743 containerd[1690]: time="2024-09-04T17:29:56.359708402Z" level=info msg="StartContainer for \"d438e42c359b353ab7f83181166347a026a9c4b68e053ead5d58f6a905dfccd9\"" Sep 4 17:29:56.396000 systemd[1]: Started cri-containerd-d438e42c359b353ab7f83181166347a026a9c4b68e053ead5d58f6a905dfccd9.scope - libcontainer container d438e42c359b353ab7f83181166347a026a9c4b68e053ead5d58f6a905dfccd9. 
Sep 4 17:29:56.545126 containerd[1690]: time="2024-09-04T17:29:56.545051441Z" level=info msg="StartContainer for \"d438e42c359b353ab7f83181166347a026a9c4b68e053ead5d58f6a905dfccd9\" returns successfully" Sep 4 17:29:56.931582 kubelet[3182]: I0904 17:29:56.931453 3182 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58b48b8b49-tlrhv" podStartSLOduration=3.001080234 podCreationTimestamp="2024-09-04 17:29:51 +0000 UTC" firstStartedPulling="2024-09-04 17:29:53.037042829 +0000 UTC m=+75.511286129" lastFinishedPulling="2024-09-04 17:29:55.967373844 +0000 UTC m=+78.441617144" observedRunningTime="2024-09-04 17:29:56.915083614 +0000 UTC m=+79.389326914" watchObservedRunningTime="2024-09-04 17:29:56.931411249 +0000 UTC m=+79.405654549" Sep 4 17:29:57.304250 kubelet[3182]: I0904 17:29:57.303060 3182 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58b48b8b49-bx7cq" podStartSLOduration=3.109362122 podCreationTimestamp="2024-09-04 17:29:51 +0000 UTC" firstStartedPulling="2024-09-04 17:29:53.119939806 +0000 UTC m=+75.594183206" lastFinishedPulling="2024-09-04 17:29:56.313590519 +0000 UTC m=+78.787833819" observedRunningTime="2024-09-04 17:29:56.951093513 +0000 UTC m=+79.425336813" watchObservedRunningTime="2024-09-04 17:29:57.303012735 +0000 UTC m=+79.777256135" Sep 4 17:30:44.331243 update_engine[1667]: I0904 17:30:44.331180 1667 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 4 17:30:44.331243 update_engine[1667]: I0904 17:30:44.331234 1667 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 4 17:30:44.331986 update_engine[1667]: I0904 17:30:44.331461 1667 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 4 17:30:44.332306 update_engine[1667]: I0904 17:30:44.332274 1667 omaha_request_params.cc:62] Current group set to stable Sep 4 17:30:44.332601 update_engine[1667]: I0904 17:30:44.332429 1667 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 4 17:30:44.332601 update_engine[1667]: I0904 17:30:44.332445 1667 update_attempter.cc:643] Scheduling an action processor start. 
Sep 4 17:30:44.332601 update_engine[1667]: I0904 17:30:44.332465 1667 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 4 17:30:44.332601 update_engine[1667]: I0904 17:30:44.332506 1667 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Sep 4 17:30:44.332601 update_engine[1667]: I0904 17:30:44.332590 1667 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 4 17:30:44.332601 update_engine[1667]: I0904 17:30:44.332599 1667 omaha_request_action.cc:272] Request:
Sep 4 17:30:44.332601 update_engine[1667]:
Sep 4 17:30:44.332601 update_engine[1667]:
Sep 4 17:30:44.332601 update_engine[1667]:
Sep 4 17:30:44.332601 update_engine[1667]:
Sep 4 17:30:44.332601 update_engine[1667]:
Sep 4 17:30:44.332601 update_engine[1667]:
Sep 4 17:30:44.332601 update_engine[1667]:
Sep 4 17:30:44.332601 update_engine[1667]:
Sep 4 17:30:44.332601 update_engine[1667]: I0904 17:30:44.332605 1667 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 4 17:30:44.333312 locksmithd[1763]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Sep 4 17:30:44.334661 update_engine[1667]: I0904 17:30:44.334626 1667 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 4 17:30:44.335091 update_engine[1667]: I0904 17:30:44.335062 1667 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 4 17:30:44.406248 update_engine[1667]: E0904 17:30:44.406168 1667 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 4 17:30:44.406468 update_engine[1667]: I0904 17:30:44.406340 1667 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Sep 4 17:30:48.608612 systemd[1]: run-containerd-runc-k8s.io-54b997c9253ae2605a9b8a79d7d9642c288e9afb515c755eb8a61f8503973c3c-runc.nEPYbH.mount: Deactivated successfully.
Sep 4 17:30:54.292533 update_engine[1667]: I0904 17:30:54.292433 1667 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 4 17:30:54.293490 update_engine[1667]: I0904 17:30:54.292986 1667 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 4 17:30:54.293490 update_engine[1667]: I0904 17:30:54.293370 1667 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 4 17:30:54.315021 update_engine[1667]: E0904 17:30:54.314975 1667 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 4 17:30:54.315120 update_engine[1667]: I0904 17:30:54.315037 1667 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Sep 4 17:31:04.292541 update_engine[1667]: I0904 17:31:04.292438 1667 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 4 17:31:04.293155 update_engine[1667]: I0904 17:31:04.292734 1667 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 4 17:31:04.293155 update_engine[1667]: I0904 17:31:04.293056 1667 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 4 17:31:04.309148 update_engine[1667]: E0904 17:31:04.309108 1667 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 4 17:31:04.309262 update_engine[1667]: I0904 17:31:04.309168 1667 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Sep 4 17:31:08.474347 systemd[1]: run-containerd-runc-k8s.io-93056e586331a15324df9cd1432d8ae406eb9ccfe55447fcf5ab1e874f02cd57-runc.0x9miK.mount: Deactivated successfully.
Sep 4 17:31:10.361150 systemd[1]: Started sshd@7-10.200.8.42:22-10.200.16.10:40008.service - OpenSSH per-connection server daemon (10.200.16.10:40008). Sep 4 17:31:10.983531 sshd[5895]: Accepted publickey for core from 10.200.16.10 port 40008 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:31:10.985313 sshd[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:10.990315 systemd-logind[1666]: New session 10 of user core. Sep 4 17:31:10.998179 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:31:11.525864 sshd[5895]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:11.530290 systemd[1]: sshd@7-10.200.8.42:22-10.200.16.10:40008.service: Deactivated successfully. Sep 4 17:31:11.535222 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:31:11.537915 systemd-logind[1666]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:31:11.539807 systemd-logind[1666]: Removed session 10. Sep 4 17:31:14.285997 update_engine[1667]: I0904 17:31:14.285905 1667 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 17:31:14.287027 update_engine[1667]: I0904 17:31:14.286394 1667 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 17:31:14.287111 update_engine[1667]: I0904 17:31:14.287041 1667 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 17:31:14.308977 update_engine[1667]: E0904 17:31:14.308836 1667 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 17:31:14.309233 update_engine[1667]: I0904 17:31:14.309051 1667 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 4 17:31:14.309233 update_engine[1667]: I0904 17:31:14.309072 1667 omaha_request_action.cc:617] Omaha request response: Sep 4 17:31:14.309233 update_engine[1667]: E0904 17:31:14.309224 1667 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 4 17:31:14.309381 update_engine[1667]: I0904 17:31:14.309250 1667 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 4 17:31:14.309381 update_engine[1667]: I0904 17:31:14.309257 1667 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 4 17:31:14.309381 update_engine[1667]: I0904 17:31:14.309262 1667 update_attempter.cc:306] Processing Done. Sep 4 17:31:14.309381 update_engine[1667]: E0904 17:31:14.309280 1667 update_attempter.cc:619] Update failed. Sep 4 17:31:14.309381 update_engine[1667]: I0904 17:31:14.309288 1667 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 4 17:31:14.309381 update_engine[1667]: I0904 17:31:14.309293 1667 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 4 17:31:14.309381 update_engine[1667]: I0904 17:31:14.309298 1667 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Sep 4 17:31:14.309716 update_engine[1667]: I0904 17:31:14.309396 1667 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 4 17:31:14.309716 update_engine[1667]: I0904 17:31:14.309422 1667 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 4 17:31:14.309716 update_engine[1667]: I0904 17:31:14.309428 1667 omaha_request_action.cc:272] Request:
Sep 4 17:31:14.309716 update_engine[1667]:
Sep 4 17:31:14.309716 update_engine[1667]:
Sep 4 17:31:14.309716 update_engine[1667]:
Sep 4 17:31:14.309716 update_engine[1667]:
Sep 4 17:31:14.309716 update_engine[1667]:
Sep 4 17:31:14.309716 update_engine[1667]:
Sep 4 17:31:14.309716 update_engine[1667]: I0904 17:31:14.309434 1667 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 4 17:31:14.309716 update_engine[1667]: I0904 17:31:14.309592 1667 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 4 17:31:14.310207 update_engine[1667]: I0904 17:31:14.309805 1667 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 4 17:31:14.310264 locksmithd[1763]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Sep 4 17:31:14.325251 update_engine[1667]: E0904 17:31:14.325217 1667 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 4 17:31:14.325376 update_engine[1667]: I0904 17:31:14.325270 1667 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 4 17:31:14.325376 update_engine[1667]: I0904 17:31:14.325279 1667 omaha_request_action.cc:617] Omaha request response:
Sep 4 17:31:14.325376 update_engine[1667]: I0904 17:31:14.325286 1667 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 4 17:31:14.325376 update_engine[1667]: I0904 17:31:14.325291 1667 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 4 17:31:14.325376 update_engine[1667]: I0904 17:31:14.325295 1667 update_attempter.cc:306] Processing Done.
Sep 4 17:31:14.325376 update_engine[1667]: I0904 17:31:14.325302 1667 update_attempter.cc:310] Error event sent.
Sep 4 17:31:14.325376 update_engine[1667]: I0904 17:31:14.325311 1667 update_check_scheduler.cc:74] Next update check in 42m45s
Sep 4 17:31:14.325718 locksmithd[1763]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Sep 4 17:31:16.634078 systemd[1]: Started sshd@8-10.200.8.42:22-10.200.16.10:40012.service - OpenSSH per-connection server daemon (10.200.16.10:40012).
Sep 4 17:31:17.297345 sshd[5942]: Accepted publickey for core from 10.200.16.10 port 40012 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A
Sep 4 17:31:17.299624 sshd[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:31:17.309507 systemd-logind[1666]: New session 11 of user core.
Sep 4 17:31:17.314097 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 4 17:31:17.788277 sshd[5942]: pam_unix(sshd:session): session closed for user core
Sep 4 17:31:17.792920 systemd[1]: sshd@8-10.200.8.42:22-10.200.16.10:40012.service: Deactivated successfully.
Sep 4 17:31:17.795442 systemd[1]: session-11.scope: Deactivated successfully.
Sep 4 17:31:17.796480 systemd-logind[1666]: Session 11 logged out. Waiting for processes to exit.
Sep 4 17:31:17.798343 systemd-logind[1666]: Removed session 11.
Sep 4 17:31:22.897068 systemd[1]: Started sshd@9-10.200.8.42:22-10.200.16.10:46388.service - OpenSSH per-connection server daemon (10.200.16.10:46388). Sep 4 17:31:23.528994 sshd[5979]: Accepted publickey for core from 10.200.16.10 port 46388 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:31:23.530670 sshd[5979]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:23.535580 systemd-logind[1666]: New session 12 of user core. Sep 4 17:31:23.542434 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:31:24.029345 sshd[5979]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:24.032910 systemd[1]: sshd@9-10.200.8.42:22-10.200.16.10:46388.service: Deactivated successfully. Sep 4 17:31:24.035767 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:31:24.038320 systemd-logind[1666]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:31:24.040325 systemd-logind[1666]: Removed session 12. Sep 4 17:31:29.144154 systemd[1]: Started sshd@10-10.200.8.42:22-10.200.16.10:45440.service - OpenSSH per-connection server daemon (10.200.16.10:45440). Sep 4 17:31:29.762516 sshd[5998]: Accepted publickey for core from 10.200.16.10 port 45440 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:31:29.764298 sshd[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:29.769449 systemd-logind[1666]: New session 13 of user core. Sep 4 17:31:29.778003 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:31:30.264373 sshd[5998]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:30.268201 systemd[1]: sshd@10-10.200.8.42:22-10.200.16.10:45440.service: Deactivated successfully. Sep 4 17:31:30.270409 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:31:30.271247 systemd-logind[1666]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:31:30.272279 systemd-logind[1666]: Removed session 13. Sep 4 17:31:35.380130 systemd[1]: Started sshd@11-10.200.8.42:22-10.200.16.10:45456.service - OpenSSH per-connection server daemon (10.200.16.10:45456). Sep 4 17:31:36.002548 sshd[6017]: Accepted publickey for core from 10.200.16.10 port 45456 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:31:36.003829 sshd[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:36.008270 systemd-logind[1666]: New session 14 of user core. Sep 4 17:31:36.013996 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:31:36.497644 sshd[6017]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:36.501052 systemd[1]: sshd@11-10.200.8.42:22-10.200.16.10:45456.service: Deactivated successfully. Sep 4 17:31:36.503817 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:31:36.505819 systemd-logind[1666]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:31:36.506945 systemd-logind[1666]: Removed session 14. Sep 4 17:31:41.617014 systemd[1]: Started sshd@12-10.200.8.42:22-10.200.16.10:51818.service - OpenSSH per-connection server daemon (10.200.16.10:51818). Sep 4 17:31:42.252113 sshd[6051]: Accepted publickey for core from 10.200.16.10 port 51818 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:31:42.253611 sshd[6051]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:42.258247 systemd-logind[1666]: New session 15 of user core. 
Sep 4 17:31:42.263367 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:31:42.746723 sshd[6051]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:42.750242 systemd[1]: sshd@12-10.200.8.42:22-10.200.16.10:51818.service: Deactivated successfully. Sep 4 17:31:42.753135 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:31:42.755255 systemd-logind[1666]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:31:42.756443 systemd-logind[1666]: Removed session 15. Sep 4 17:31:42.870112 systemd[1]: Started sshd@13-10.200.8.42:22-10.200.16.10:51820.service - OpenSSH per-connection server daemon (10.200.16.10:51820). Sep 4 17:31:43.497970 sshd[6065]: Accepted publickey for core from 10.200.16.10 port 51820 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:31:43.499624 sshd[6065]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:43.504931 systemd-logind[1666]: New session 16 of user core. Sep 4 17:31:43.515279 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:31:44.609007 sshd[6065]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:44.612576 systemd[1]: sshd@13-10.200.8.42:22-10.200.16.10:51820.service: Deactivated successfully. Sep 4 17:31:44.615485 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:31:44.617387 systemd-logind[1666]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:31:44.618500 systemd-logind[1666]: Removed session 16. Sep 4 17:31:44.720231 systemd[1]: Started sshd@14-10.200.8.42:22-10.200.16.10:51826.service - OpenSSH per-connection server daemon (10.200.16.10:51826). Sep 4 17:31:45.349095 sshd[6081]: Accepted publickey for core from 10.200.16.10 port 51826 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:31:45.350479 sshd[6081]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:45.354973 systemd-logind[1666]: New session 17 of user core. Sep 4 17:31:45.360011 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:31:45.869182 sshd[6081]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:45.872672 systemd[1]: sshd@14-10.200.8.42:22-10.200.16.10:51826.service: Deactivated successfully. Sep 4 17:31:45.875346 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:31:45.877337 systemd-logind[1666]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:31:45.878511 systemd-logind[1666]: Removed session 17. Sep 4 17:31:50.986169 systemd[1]: Started sshd@15-10.200.8.42:22-10.200.16.10:60294.service - OpenSSH per-connection server daemon (10.200.16.10:60294). Sep 4 17:31:51.611466 sshd[6115]: Accepted publickey for core from 10.200.16.10 port 60294 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:31:51.612922 sshd[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:51.618102 systemd-logind[1666]: New session 18 of user core. Sep 4 17:31:51.625103 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:31:52.107329 sshd[6115]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:52.111798 systemd[1]: sshd@15-10.200.8.42:22-10.200.16.10:60294.service: Deactivated successfully. Sep 4 17:31:52.114465 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:31:52.115346 systemd-logind[1666]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:31:52.116448 systemd-logind[1666]: Removed session 18. 
Sep 4 17:31:57.226163 systemd[1]: Started sshd@16-10.200.8.42:22-10.200.16.10:60304.service - OpenSSH per-connection server daemon (10.200.16.10:60304). Sep 4 17:31:57.846213 sshd[6135]: Accepted publickey for core from 10.200.16.10 port 60304 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:31:57.847656 sshd[6135]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:57.852104 systemd-logind[1666]: New session 19 of user core. Sep 4 17:31:57.860991 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:31:58.358181 sshd[6135]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:58.361961 systemd[1]: sshd@16-10.200.8.42:22-10.200.16.10:60304.service: Deactivated successfully. Sep 4 17:31:58.364756 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:31:58.366897 systemd-logind[1666]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:31:58.368022 systemd-logind[1666]: Removed session 19. Sep 4 17:32:03.475241 systemd[1]: Started sshd@17-10.200.8.42:22-10.200.16.10:33520.service - OpenSSH per-connection server daemon (10.200.16.10:33520). Sep 4 17:32:04.099724 sshd[6152]: Accepted publickey for core from 10.200.16.10 port 33520 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:32:04.101486 sshd[6152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:32:04.107426 systemd-logind[1666]: New session 20 of user core. Sep 4 17:32:04.112005 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:32:04.605844 sshd[6152]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:04.608624 systemd[1]: sshd@17-10.200.8.42:22-10.200.16.10:33520.service: Deactivated successfully. Sep 4 17:32:04.610944 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:32:04.612722 systemd-logind[1666]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:32:04.613734 systemd-logind[1666]: Removed session 20. Sep 4 17:32:08.469749 systemd[1]: run-containerd-runc-k8s.io-93056e586331a15324df9cd1432d8ae406eb9ccfe55447fcf5ab1e874f02cd57-runc.245FmF.mount: Deactivated successfully. Sep 4 17:32:09.723189 systemd[1]: Started sshd@18-10.200.8.42:22-10.200.16.10:48772.service - OpenSSH per-connection server daemon (10.200.16.10:48772). Sep 4 17:32:10.344103 sshd[6195]: Accepted publickey for core from 10.200.16.10 port 48772 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:32:10.345553 sshd[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:32:10.350158 systemd-logind[1666]: New session 21 of user core. Sep 4 17:32:10.354010 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:32:10.841954 sshd[6195]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:10.844932 systemd[1]: sshd@18-10.200.8.42:22-10.200.16.10:48772.service: Deactivated successfully. Sep 4 17:32:10.847101 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:32:10.848762 systemd-logind[1666]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:32:10.849995 systemd-logind[1666]: Removed session 21. Sep 4 17:32:15.957145 systemd[1]: Started sshd@19-10.200.8.42:22-10.200.16.10:48778.service - OpenSSH per-connection server daemon (10.200.16.10:48778). 
Sep 4 17:32:16.579792 sshd[6232]: Accepted publickey for core from 10.200.16.10 port 48778 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:32:16.581310 sshd[6232]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:32:16.590612 systemd-logind[1666]: New session 22 of user core. Sep 4 17:32:16.593016 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:32:17.132627 sshd[6232]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:17.136890 systemd-logind[1666]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:32:17.137907 systemd[1]: sshd@19-10.200.8.42:22-10.200.16.10:48778.service: Deactivated successfully. Sep 4 17:32:17.141398 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:32:17.143095 systemd-logind[1666]: Removed session 22. Sep 4 17:32:17.245422 systemd[1]: Started sshd@20-10.200.8.42:22-10.200.16.10:48792.service - OpenSSH per-connection server daemon (10.200.16.10:48792). Sep 4 17:32:17.862169 sshd[6248]: Accepted publickey for core from 10.200.16.10 port 48792 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:32:17.863590 sshd[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:32:17.867907 systemd-logind[1666]: New session 23 of user core. Sep 4 17:32:17.873242 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 17:32:18.426735 sshd[6248]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:18.431397 systemd[1]: sshd@20-10.200.8.42:22-10.200.16.10:48792.service: Deactivated successfully. Sep 4 17:32:18.433499 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:32:18.434439 systemd-logind[1666]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:32:18.435488 systemd-logind[1666]: Removed session 23. Sep 4 17:32:18.542340 systemd[1]: Started sshd@21-10.200.8.42:22-10.200.16.10:49710.service - OpenSSH per-connection server daemon (10.200.16.10:49710). Sep 4 17:32:19.162002 sshd[6259]: Accepted publickey for core from 10.200.16.10 port 49710 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:32:19.163526 sshd[6259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:32:19.168140 systemd-logind[1666]: New session 24 of user core. Sep 4 17:32:19.174033 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:32:20.437961 sshd[6259]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:20.442814 systemd[1]: sshd@21-10.200.8.42:22-10.200.16.10:49710.service: Deactivated successfully. Sep 4 17:32:20.445200 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:32:20.446193 systemd-logind[1666]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:32:20.447354 systemd-logind[1666]: Removed session 24. Sep 4 17:32:20.555378 systemd[1]: Started sshd@22-10.200.8.42:22-10.200.16.10:49718.service - OpenSSH per-connection server daemon (10.200.16.10:49718). Sep 4 17:32:21.176041 sshd[6298]: Accepted publickey for core from 10.200.16.10 port 49718 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:32:21.179691 sshd[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:32:21.185163 systemd-logind[1666]: New session 25 of user core. Sep 4 17:32:21.191029 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 4 17:32:21.867532 sshd[6298]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:21.872291 systemd[1]: sshd@22-10.200.8.42:22-10.200.16.10:49718.service: Deactivated successfully. Sep 4 17:32:21.874695 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:32:21.875636 systemd-logind[1666]: Session 25 logged out. Waiting for processes to exit. Sep 4 17:32:21.876728 systemd-logind[1666]: Removed session 25. Sep 4 17:32:21.983470 systemd[1]: Started sshd@23-10.200.8.42:22-10.200.16.10:49728.service - OpenSSH per-connection server daemon (10.200.16.10:49728). Sep 4 17:32:22.601127 sshd[6311]: Accepted publickey for core from 10.200.16.10 port 49728 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:32:22.602913 sshd[6311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:32:22.608049 systemd-logind[1666]: New session 26 of user core. Sep 4 17:32:22.616002 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 17:32:23.092890 sshd[6311]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:23.096697 systemd[1]: sshd@23-10.200.8.42:22-10.200.16.10:49728.service: Deactivated successfully. Sep 4 17:32:23.099058 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 17:32:23.099876 systemd-logind[1666]: Session 26 logged out. Waiting for processes to exit. Sep 4 17:32:23.100935 systemd-logind[1666]: Removed session 26. Sep 4 17:32:28.206642 systemd[1]: Started sshd@24-10.200.8.42:22-10.200.16.10:49734.service - OpenSSH per-connection server daemon (10.200.16.10:49734). Sep 4 17:32:28.833692 sshd[6336]: Accepted publickey for core from 10.200.16.10 port 49734 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:32:28.835558 sshd[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:32:28.840720 systemd-logind[1666]: New session 27 of user core. Sep 4 17:32:28.848015 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 17:32:29.340255 sshd[6336]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:29.343201 systemd[1]: sshd@24-10.200.8.42:22-10.200.16.10:49734.service: Deactivated successfully. Sep 4 17:32:29.345352 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 17:32:29.347057 systemd-logind[1666]: Session 27 logged out. Waiting for processes to exit. Sep 4 17:32:29.348285 systemd-logind[1666]: Removed session 27. Sep 4 17:32:34.461177 systemd[1]: Started sshd@25-10.200.8.42:22-10.200.16.10:56104.service - OpenSSH per-connection server daemon (10.200.16.10:56104). Sep 4 17:32:35.079267 sshd[6350]: Accepted publickey for core from 10.200.16.10 port 56104 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:32:35.080774 sshd[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:32:35.085304 systemd-logind[1666]: New session 28 of user core. Sep 4 17:32:35.092053 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 4 17:32:35.578111 sshd[6350]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:35.583236 systemd[1]: sshd@25-10.200.8.42:22-10.200.16.10:56104.service: Deactivated successfully. Sep 4 17:32:35.585508 systemd[1]: session-28.scope: Deactivated successfully. Sep 4 17:32:35.587306 systemd-logind[1666]: Session 28 logged out. Waiting for processes to exit. Sep 4 17:32:35.588310 systemd-logind[1666]: Removed session 28. 
Sep 4 17:32:40.693154 systemd[1]: Started sshd@26-10.200.8.42:22-10.200.16.10:45264.service - OpenSSH per-connection server daemon (10.200.16.10:45264). Sep 4 17:32:41.310712 sshd[6399]: Accepted publickey for core from 10.200.16.10 port 45264 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:32:41.312218 sshd[6399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:32:41.317142 systemd-logind[1666]: New session 29 of user core. Sep 4 17:32:41.321389 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 4 17:32:41.806116 sshd[6399]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:41.811687 systemd-logind[1666]: Session 29 logged out. Waiting for processes to exit. Sep 4 17:32:41.814201 systemd[1]: sshd@26-10.200.8.42:22-10.200.16.10:45264.service: Deactivated successfully. Sep 4 17:32:41.818056 systemd[1]: session-29.scope: Deactivated successfully. Sep 4 17:32:41.820206 systemd-logind[1666]: Removed session 29. Sep 4 17:32:46.915075 systemd[1]: Started sshd@27-10.200.8.42:22-10.200.16.10:45268.service - OpenSSH per-connection server daemon (10.200.16.10:45268). Sep 4 17:32:47.537216 sshd[6412]: Accepted publickey for core from 10.200.16.10 port 45268 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:32:47.539015 sshd[6412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:32:47.543808 systemd-logind[1666]: New session 30 of user core. Sep 4 17:32:47.552015 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 4 17:32:48.036177 sshd[6412]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:48.039637 systemd[1]: sshd@27-10.200.8.42:22-10.200.16.10:45268.service: Deactivated successfully. Sep 4 17:32:48.042434 systemd[1]: session-30.scope: Deactivated successfully. Sep 4 17:32:48.044345 systemd-logind[1666]: Session 30 logged out. Waiting for processes to exit. Sep 4 17:32:48.045523 systemd-logind[1666]: Removed session 30. Sep 4 17:32:53.146923 systemd[1]: Started sshd@28-10.200.8.42:22-10.200.16.10:44238.service - OpenSSH per-connection server daemon (10.200.16.10:44238). Sep 4 17:32:53.774894 sshd[6454]: Accepted publickey for core from 10.200.16.10 port 44238 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:32:53.776611 sshd[6454]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:32:53.782514 systemd-logind[1666]: New session 31 of user core. Sep 4 17:32:53.789286 systemd[1]: Started session-31.scope - Session 31 of User core. Sep 4 17:32:54.267663 sshd[6454]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:54.272355 systemd[1]: sshd@28-10.200.8.42:22-10.200.16.10:44238.service: Deactivated successfully. Sep 4 17:32:54.274697 systemd[1]: session-31.scope: Deactivated successfully. Sep 4 17:32:54.275572 systemd-logind[1666]: Session 31 logged out. Waiting for processes to exit. Sep 4 17:32:54.276500 systemd-logind[1666]: Removed session 31. Sep 4 17:32:59.383333 systemd[1]: Started sshd@29-10.200.8.42:22-10.200.16.10:54146.service - OpenSSH per-connection server daemon (10.200.16.10:54146). Sep 4 17:33:00.035061 sshd[6472]: Accepted publickey for core from 10.200.16.10 port 54146 ssh2: RSA SHA256:lj/yPXjzMohcs9VKZQq3N6FDs1hBckP1QCzeuyJtO5A Sep 4 17:33:00.036886 sshd[6472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:00.041462 systemd-logind[1666]: New session 32 of user core. 
Sep 4 17:33:00.048007 systemd[1]: Started session-32.scope - Session 32 of User core. Sep 4 17:33:00.525906 sshd[6472]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:00.530377 systemd[1]: sshd@29-10.200.8.42:22-10.200.16.10:54146.service: Deactivated successfully. Sep 4 17:33:00.533048 systemd[1]: session-32.scope: Deactivated successfully. Sep 4 17:33:00.534047 systemd-logind[1666]: Session 32 logged out. Waiting for processes to exit. Sep 4 17:33:00.535299 systemd-logind[1666]: Removed session 32.