Jun 25 18:42:38.102932 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024 Jun 25 18:42:38.102970 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:42:38.102986 kernel: BIOS-provided physical RAM map: Jun 25 18:42:38.102997 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 25 18:42:38.103008 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jun 25 18:42:38.103019 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jun 25 18:42:38.103030 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Jun 25 18:42:38.103042 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Jun 25 18:42:38.103054 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jun 25 18:42:38.103065 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jun 25 18:42:38.103076 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jun 25 18:42:38.103085 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jun 25 18:42:38.103095 kernel: printk: bootconsole [earlyser0] enabled Jun 25 18:42:38.103108 kernel: NX (Execute Disable) protection: active Jun 25 18:42:38.103123 kernel: APIC: Static calls initialized Jun 25 18:42:38.103135 kernel: efi: EFI v2.7 by Microsoft Jun 25 18:42:38.103148 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c0a98 Jun 25 18:42:38.103159 kernel: SMBIOS 3.1.0 present. 
Jun 25 18:42:38.103172 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jun 25 18:42:38.103184 kernel: Hypervisor detected: Microsoft Hyper-V Jun 25 18:42:38.103196 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jun 25 18:42:38.103207 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Jun 25 18:42:38.103219 kernel: Hyper-V: Nested features: 0x1e0101 Jun 25 18:42:38.103232 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jun 25 18:42:38.103248 kernel: Hyper-V: Using hypercall for remote TLB flush Jun 25 18:42:38.103262 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 25 18:42:38.103275 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 25 18:42:38.103289 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jun 25 18:42:38.103302 kernel: tsc: Detected 2593.907 MHz processor Jun 25 18:42:38.103316 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 18:42:38.103330 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 18:42:38.103343 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jun 25 18:42:38.103356 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jun 25 18:42:38.103373 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 18:42:38.103386 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jun 25 18:42:38.103398 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jun 25 18:42:38.103411 kernel: Using GB pages for direct mapping Jun 25 18:42:38.103441 kernel: Secure boot disabled Jun 25 18:42:38.103454 kernel: ACPI: Early table checksum verification disabled Jun 25 18:42:38.103467 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jun 25 18:42:38.103487 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:38.103505 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:38.103519 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jun 25 18:42:38.103533 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jun 25 18:42:38.103547 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:38.103561 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:38.103576 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:38.103592 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:38.103606 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:38.103620 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:38.103634 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:38.103648 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jun 25 18:42:38.103662 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jun 25 18:42:38.103676 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jun 25 18:42:38.103690 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jun 25 18:42:38.103707 kernel: 
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jun 25 18:42:38.103721 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jun 25 18:42:38.103735 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jun 25 18:42:38.103748 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jun 25 18:42:38.103763 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jun 25 18:42:38.103777 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jun 25 18:42:38.103791 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 25 18:42:38.103805 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 25 18:42:38.103823 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jun 25 18:42:38.103840 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jun 25 18:42:38.103854 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jun 25 18:42:38.103868 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jun 25 18:42:38.103883 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jun 25 18:42:38.103897 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jun 25 18:42:38.103911 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jun 25 18:42:38.103924 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jun 25 18:42:38.103938 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jun 25 18:42:38.103952 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jun 25 18:42:38.103969 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jun 25 18:42:38.103984 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jun 25 18:42:38.103998 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jun 25 18:42:38.104011 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jun 25 18:42:38.104025 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jun 25 18:42:38.104039 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jun 25 18:42:38.104054 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jun 25 18:42:38.104068 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jun 25 18:42:38.104082 kernel: Zone ranges: Jun 25 18:42:38.104099 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 18:42:38.104113 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jun 25 18:42:38.104127 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jun 25 18:42:38.104141 kernel: Movable zone start for each node Jun 25 18:42:38.104155 kernel: Early memory node ranges Jun 25 18:42:38.104169 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jun 25 18:42:38.104183 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jun 25 18:42:38.104197 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jun 25 18:42:38.104211 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jun 25 18:42:38.104229 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jun 25 18:42:38.104243 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 18:42:38.104258 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jun 25 18:42:38.104272 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Jun 25 18:42:38.104285 kernel: ACPI: PM-Timer IO Port: 0x408 Jun 25 18:42:38.104299 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jun 25 18:42:38.104312 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jun 25 18:42:38.104326 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 18:42:38.104341 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 18:42:38.104358 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jun 25 18:42:38.104372 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 25 18:42:38.104386 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jun 25 18:42:38.104399 kernel: Booting paravirtualized kernel on Hyper-V Jun 25 18:42:38.104414 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 18:42:38.107404 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 25 18:42:38.107434 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Jun 25 18:42:38.107449 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Jun 25 18:42:38.107462 kernel: pcpu-alloc: [0] 0 1 Jun 25 18:42:38.107480 kernel: Hyper-V: PV spinlocks enabled Jun 25 18:42:38.107494 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 18:42:38.107510 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:42:38.107525 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 18:42:38.107538 kernel: random: crng init done Jun 25 18:42:38.107551 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jun 25 18:42:38.107564 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 18:42:38.107576 kernel: Fallback order for Node 0: 0 Jun 25 18:42:38.107592 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jun 25 18:42:38.107616 kernel: Policy zone: Normal Jun 25 18:42:38.107631 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 18:42:38.107645 kernel: software IO TLB: area num 2. Jun 25 18:42:38.107659 kernel: Memory: 8070932K/8387460K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 316268K reserved, 0K cma-reserved) Jun 25 18:42:38.107673 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 18:42:38.107687 kernel: ftrace: allocating 37650 entries in 148 pages Jun 25 18:42:38.107701 kernel: ftrace: allocated 148 pages with 3 groups Jun 25 18:42:38.107715 kernel: Dynamic Preempt: voluntary Jun 25 18:42:38.107730 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 18:42:38.107747 kernel: rcu: RCU event tracing is enabled. Jun 25 18:42:38.107767 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 18:42:38.107783 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 18:42:38.107798 kernel: Rude variant of Tasks RCU enabled. Jun 25 18:42:38.107814 kernel: Tracing variant of Tasks RCU enabled. 
Jun 25 18:42:38.107830 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 18:42:38.107850 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 18:42:38.107865 kernel: Using NULL legacy PIC Jun 25 18:42:38.107881 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jun 25 18:42:38.107897 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 18:42:38.107912 kernel: Console: colour dummy device 80x25 Jun 25 18:42:38.107928 kernel: printk: console [tty1] enabled Jun 25 18:42:38.107944 kernel: printk: console [ttyS0] enabled Jun 25 18:42:38.107959 kernel: printk: bootconsole [earlyser0] disabled Jun 25 18:42:38.107975 kernel: ACPI: Core revision 20230628 Jun 25 18:42:38.107991 kernel: Failed to register legacy timer interrupt Jun 25 18:42:38.108010 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 18:42:38.108025 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 25 18:42:38.108040 kernel: Hyper-V: Using IPI hypercalls Jun 25 18:42:38.108054 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jun 25 18:42:38.108067 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jun 25 18:42:38.108081 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jun 25 18:42:38.108096 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jun 25 18:42:38.108111 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jun 25 18:42:38.108125 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jun 25 18:42:38.108143 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Jun 25 18:42:38.108156 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 25 18:42:38.108169 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jun 25 18:42:38.108182 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 18:42:38.108196 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 18:42:38.108209 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 18:42:38.108221 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 18:42:38.108235 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jun 25 18:42:38.108249 kernel: RETBleed: Vulnerable Jun 25 18:42:38.108267 kernel: Speculative Store Bypass: Vulnerable Jun 25 18:42:38.108281 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 18:42:38.108294 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 18:42:38.108308 kernel: GDS: Unknown: Dependent on hypervisor status Jun 25 18:42:38.108321 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 18:42:38.108334 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 18:42:38.108348 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 18:42:38.108361 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jun 25 18:42:38.108375 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jun 25 18:42:38.108389 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jun 25 18:42:38.108404 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 18:42:38.108433 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jun 25 18:42:38.112333 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jun 25 18:42:38.112352 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jun 25 18:42:38.112366 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jun 25 18:42:38.112381 kernel: Freeing SMP alternatives memory: 32K Jun 25 18:42:38.112395 kernel: pid_max: default: 32768 minimum: 301 Jun 25 18:42:38.112408 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 18:42:38.112434 kernel: SELinux: Initializing. Jun 25 18:42:38.112448 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 18:42:38.112462 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 18:42:38.112475 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jun 25 18:42:38.112490 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:42:38.112510 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:42:38.112525 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:42:38.112540 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jun 25 18:42:38.112555 kernel: signal: max sigframe size: 3632 Jun 25 18:42:38.112568 kernel: rcu: Hierarchical SRCU implementation. Jun 25 18:42:38.112583 kernel: rcu: Max phase no-delay instances is 400. Jun 25 18:42:38.112598 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 25 18:42:38.112612 kernel: smp: Bringing up secondary CPUs ... Jun 25 18:42:38.112626 kernel: smpboot: x86: Booting SMP configuration: Jun 25 18:42:38.112645 kernel: .... node #0, CPUs: #1 Jun 25 18:42:38.112659 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jun 25 18:42:38.112676 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jun 25 18:42:38.112691 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 18:42:38.112704 kernel: smpboot: Max logical packages: 1 Jun 25 18:42:38.112718 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jun 25 18:42:38.112732 kernel: devtmpfs: initialized Jun 25 18:42:38.112744 kernel: x86/mm: Memory block size: 128MB Jun 25 18:42:38.112762 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jun 25 18:42:38.112777 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 18:42:38.112791 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 18:42:38.112806 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 18:42:38.112822 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 18:42:38.112837 kernel: audit: initializing netlink subsys (disabled) Jun 25 18:42:38.112849 kernel: audit: type=2000 audit(1719340957.028:1): state=initialized audit_enabled=0 res=1 Jun 25 18:42:38.112862 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 18:42:38.112875 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 18:42:38.112892 kernel: cpuidle: using governor menu Jun 25 18:42:38.112905 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 18:42:38.112918 kernel: dca service started, version 1.12.1 Jun 25 18:42:38.112931 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jun 25 18:42:38.112945 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 25 18:42:38.112959 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 18:42:38.112973 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 18:42:38.112986 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 18:42:38.113000 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 18:42:38.113018 kernel: ACPI: Added _OSI(Module Device) Jun 25 18:42:38.113031 kernel: ACPI: Added _OSI(Processor Device) Jun 25 18:42:38.113045 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 18:42:38.113059 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 18:42:38.113073 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 18:42:38.113088 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 25 18:42:38.113102 kernel: ACPI: Interpreter enabled Jun 25 18:42:38.113116 kernel: ACPI: PM: (supports S0 S5) Jun 25 18:42:38.113129 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 18:42:38.113146 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 18:42:38.113160 kernel: PCI: Ignoring E820 reservations for host bridge windows Jun 25 18:42:38.113175 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jun 25 18:42:38.113190 kernel: iommu: Default domain type: Translated Jun 25 18:42:38.113204 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 18:42:38.113219 kernel: efivars: Registered efivars operations Jun 25 18:42:38.113234 kernel: PCI: Using ACPI for IRQ routing Jun 25 18:42:38.113248 kernel: PCI: System does not support PCI Jun 25 18:42:38.113263 kernel: vgaarb: loaded Jun 25 18:42:38.113282 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jun 25 18:42:38.113296 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 18:42:38.113312 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 18:42:38.113327 kernel: 
pnp: PnP ACPI init Jun 25 18:42:38.113342 kernel: pnp: PnP ACPI: found 3 devices Jun 25 18:42:38.113357 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 18:42:38.113372 kernel: NET: Registered PF_INET protocol family Jun 25 18:42:38.113387 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 18:42:38.113401 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jun 25 18:42:38.113431 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 18:42:38.113455 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 18:42:38.113470 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jun 25 18:42:38.113485 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jun 25 18:42:38.113500 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 25 18:42:38.113514 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 25 18:42:38.113529 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 18:42:38.113544 kernel: NET: Registered PF_XDP protocol family Jun 25 18:42:38.113559 kernel: PCI: CLS 0 bytes, default 64 Jun 25 18:42:38.113577 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jun 25 18:42:38.113592 kernel: software IO TLB: mapped [mem 0x000000003b5c0000-0x000000003f5c0000] (64MB) Jun 25 18:42:38.113607 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 25 18:42:38.113622 kernel: Initialise system trusted keyrings Jun 25 18:42:38.113637 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jun 25 18:42:38.113652 kernel: Key type asymmetric registered Jun 25 18:42:38.113667 kernel: Asymmetric key parser 'x509' registered Jun 25 18:42:38.113681 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jun 25 18:42:38.113696 kernel: io scheduler mq-deadline registered Jun 25 18:42:38.113714 kernel: io scheduler kyber registered Jun 25 18:42:38.113728 kernel: io scheduler bfq registered Jun 25 18:42:38.113743 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 18:42:38.113758 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 18:42:38.113773 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 18:42:38.113787 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jun 25 18:42:38.113802 kernel: i8042: PNP: No PS/2 controller found. 
Jun 25 18:42:38.113992 kernel: rtc_cmos 00:02: registered as rtc0 Jun 25 18:42:38.114122 kernel: rtc_cmos 00:02: setting system clock to 2024-06-25T18:42:37 UTC (1719340957) Jun 25 18:42:38.114240 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jun 25 18:42:38.114259 kernel: intel_pstate: CPU model not supported Jun 25 18:42:38.114275 kernel: efifb: probing for efifb Jun 25 18:42:38.114290 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 25 18:42:38.114306 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 25 18:42:38.114320 kernel: efifb: scrolling: redraw Jun 25 18:42:38.114335 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 25 18:42:38.114355 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 18:42:38.114370 kernel: fb0: EFI VGA frame buffer device Jun 25 18:42:38.114385 kernel: pstore: Using crash dump compression: deflate Jun 25 18:42:38.114400 kernel: pstore: Registered efi_pstore as persistent store backend Jun 25 18:42:38.114415 kernel: NET: Registered PF_INET6 protocol family Jun 25 18:42:38.114491 kernel: Segment Routing with IPv6 Jun 25 18:42:38.114506 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 18:42:38.114521 kernel: NET: Registered PF_PACKET protocol family Jun 25 18:42:38.114536 kernel: Key type dns_resolver registered Jun 25 18:42:38.114551 kernel: IPI shorthand broadcast: enabled Jun 25 18:42:38.114570 kernel: sched_clock: Marking stable (840002900, 45503500)->(1110488800, -224982400) Jun 25 18:42:38.114585 kernel: registered taskstats version 1 Jun 25 18:42:38.114600 kernel: Loading compiled-in X.509 certificates Jun 25 18:42:38.114614 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90' Jun 25 18:42:38.114629 kernel: Key type .fscrypt registered Jun 25 18:42:38.114644 kernel: Key type fscrypt-provisioning registered Jun 25 18:42:38.114659 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 18:42:38.114674 kernel: ima: Allocated hash algorithm: sha1 Jun 25 18:42:38.114692 kernel: ima: No architecture policies found Jun 25 18:42:38.114707 kernel: clk: Disabling unused clocks Jun 25 18:42:38.114722 kernel: Freeing unused kernel image (initmem) memory: 49384K Jun 25 18:42:38.114737 kernel: Write protecting the kernel read-only data: 36864k Jun 25 18:42:38.114752 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K Jun 25 18:42:38.114767 kernel: Run /init as init process Jun 25 18:42:38.114782 kernel: with arguments: Jun 25 18:42:38.114796 kernel: /init Jun 25 18:42:38.114811 kernel: with environment: Jun 25 18:42:38.114828 kernel: HOME=/ Jun 25 18:42:38.114843 kernel: TERM=linux Jun 25 18:42:38.114858 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 18:42:38.114875 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:42:38.114894 systemd[1]: Detected virtualization microsoft. Jun 25 18:42:38.114910 systemd[1]: Detected architecture x86-64. Jun 25 18:42:38.114925 systemd[1]: Running in initrd. Jun 25 18:42:38.114941 systemd[1]: No hostname configured, using default hostname. Jun 25 18:42:38.114959 systemd[1]: Hostname set to . Jun 25 18:42:38.114976 systemd[1]: Initializing machine ID from random generator. 
Jun 25 18:42:38.114991 systemd[1]: Queued start job for default target initrd.target. Jun 25 18:42:38.115007 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:42:38.115023 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:42:38.115040 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 18:42:38.115055 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:42:38.115071 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 25 18:42:38.115090 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 18:42:38.115108 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 18:42:38.115125 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 18:42:38.115140 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:42:38.115155 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:42:38.115171 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:42:38.115187 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:42:38.115206 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:42:38.115221 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:42:38.115237 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:42:38.115254 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:42:38.115270 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 18:42:38.115285 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 18:42:38.115301 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:42:38.115317 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:42:38.115337 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:42:38.115353 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:42:38.115369 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:42:38.115384 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:42:38.115400 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:42:38.115416 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:42:38.121621 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:42:38.121643 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:42:38.121658 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:38.121711 systemd-journald[176]: Collecting audit messages is disabled. Jun 25 18:42:38.121748 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 25 18:42:38.121765 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:42:38.121782 systemd[1]: Finished systemd-fsck-usr.service. 
Jun 25 18:42:38.121806 systemd-journald[176]: Journal started Jun 25 18:42:38.121853 systemd-journald[176]: Runtime Journal (/run/log/journal/dc07496189f64b2a933b3c6373702c0a) is 8.0M, max 158.8M, 150.8M free. Jun 25 18:42:38.094988 systemd-modules-load[177]: Inserted module 'overlay' Jun 25 18:42:38.138599 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:42:38.135575 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:38.147451 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 18:42:38.152591 systemd-modules-load[177]: Inserted module 'br_netfilter' Jun 25 18:42:38.157806 kernel: Bridge firewalling registered Jun 25 18:42:38.155060 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:42:38.163585 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:42:38.170601 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:42:38.175716 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:42:38.182499 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:42:38.189651 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:42:38.197521 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:42:38.209388 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:42:38.216239 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:42:38.222923 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:42:38.228685 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:42:38.234656 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:42:38.248573 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 18:42:38.267255 dracut-cmdline[215]: dracut-dracut-053 Jun 25 18:42:38.271311 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:42:38.294616 systemd-resolved[209]: Positive Trust Anchors: Jun 25 18:42:38.294637 systemd-resolved[209]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:42:38.294692 systemd-resolved[209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:42:38.299351 systemd-resolved[209]: Defaulting to hostname 'linux'. Jun 25 18:42:38.300364 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:42:38.324478 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:42:38.354447 kernel: SCSI subsystem initialized Jun 25 18:42:38.366444 kernel: Loading iSCSI transport class v2.0-870. Jun 25 18:42:38.379451 kernel: iscsi: registered transport (tcp) Jun 25 18:42:38.405762 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:42:38.405854 kernel: QLogic iSCSI HBA Driver Jun 25 18:42:38.441748 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:42:38.451630 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:42:38.485111 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:42:38.485217 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:42:38.488323 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:42:38.531458 kernel: raid6: avx512x4 gen() 18437 MB/s Jun 25 18:42:38.552451 kernel: raid6: avx512x2 gen() 18382 MB/s Jun 25 18:42:38.571437 kernel: raid6: avx512x1 gen() 18309 MB/s Jun 25 18:42:38.589438 kernel: raid6: avx2x4 gen() 18303 MB/s Jun 25 18:42:38.608438 kernel: raid6: avx2x2 gen() 18240 MB/s Jun 25 18:42:38.628364 kernel: raid6: avx2x1 gen() 14104 MB/s Jun 25 18:42:38.628401 kernel: raid6: using algorithm avx512x4 gen() 18437 MB/s Jun 25 18:42:38.649540 kernel: raid6: .... xor() 7171 MB/s, rmw enabled Jun 25 18:42:38.649595 kernel: raid6: using avx512x2 recovery algorithm Jun 25 18:42:38.677449 kernel: xor: automatically using best checksumming function avx Jun 25 18:42:38.846453 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:42:38.856497 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:42:38.866623 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:42:38.880592 systemd-udevd[397]: Using default interface naming scheme 'v255'. Jun 25 18:42:38.884971 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:42:38.897601 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:42:38.910480 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Jun 25 18:42:38.937038 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:42:38.952685 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:42:38.993254 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:42:39.004223 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jun 25 18:42:39.026031 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:42:39.036255 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:42:39.042517 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:42:39.045736 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:42:39.059596 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:42:39.088752 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:42:39.106925 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 18:42:39.122150 kernel: hv_vmbus: Vmbus version:5.2 Jun 25 18:42:39.126672 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:42:39.126909 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:42:39.169018 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 18:42:39.169048 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 18:42:39.169061 kernel: PTP clock support registered Jun 25 18:42:39.169078 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 25 18:42:39.169088 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 25 18:42:39.169103 kernel: hv_utils: Registering HyperV Utility Driver Jun 25 18:42:39.131541 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:42:39.177898 kernel: hv_vmbus: registering driver hv_utils Jun 25 18:42:39.137578 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:42:39.137821 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:39.142987 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:39.174825 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:39.204381 kernel: hv_utils: Shutdown IC version 3.2 Jun 25 18:42:39.204458 kernel: hv_utils: Heartbeat IC version 3.0 Jun 25 18:42:39.195778 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:42:39.195936 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:39.214445 kernel: hv_utils: TimeSync IC version 4.0 Jun 25 18:42:40.003810 systemd-resolved[209]: Clock change detected. Flushing caches. Jun 25 18:42:40.022521 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:40.035410 kernel: hv_vmbus: registering driver hv_storvsc Jun 25 18:42:40.035458 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 18:42:40.042548 kernel: scsi host1: storvsc_host_t Jun 25 18:42:40.042782 kernel: scsi host0: storvsc_host_t Jun 25 18:42:40.047372 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jun 25 18:42:40.048363 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:40.060326 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 18:42:40.060363 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jun 25 18:42:40.060397 kernel: AES CTR mode by8 optimization enabled Jun 25 18:42:40.067375 kernel: hv_vmbus: registering driver hv_netvsc Jun 25 18:42:40.075400 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jun 25 18:42:40.095769 kernel: hv_vmbus: registering driver hid_hyperv Jun 25 18:42:40.118292 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 25 18:42:40.118340 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 25 18:42:40.124671 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:42:40.139849 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 25 18:42:40.145516 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 18:42:40.145541 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 25 18:42:40.161239 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jun 25 18:42:40.174954 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jun 25 18:42:40.175146 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 25 18:42:40.175320 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jun 25 18:42:40.176330 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jun 25 18:42:40.176519 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:40.176541 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 25 18:42:40.282109 kernel: hv_netvsc 000d3ad8-bf2f-000d-3ad8-bf2f000d3ad8 eth0: VF slot 1 added Jun 25 18:42:40.290366 kernel: hv_vmbus: registering driver hv_pci Jun 25 18:42:40.294925 kernel: hv_pci 9282ae27-a8c9-4978-95c8-5f26c2dbf005: PCI VMBus probing: Using version 0x10004 Jun 25 18:42:40.341992 kernel: hv_pci 9282ae27-a8c9-4978-95c8-5f26c2dbf005: PCI host bridge to bus a8c9:00 Jun 25 18:42:40.342173 kernel: pci_bus a8c9:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jun 25 18:42:40.342936 kernel: pci_bus a8c9:00: No busn resource found for root bus, will use [bus 00-ff] Jun 25 18:42:40.343097 kernel: pci a8c9:00:02.0: [15b3:1016] type 00 class 0x020000 Jun 25 18:42:40.343292 kernel: pci a8c9:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jun 25 18:42:40.343487 kernel: pci a8c9:00:02.0: enabling Extended Tags Jun 25 18:42:40.343657 kernel: pci a8c9:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a8c9:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jun 25 18:42:40.343824 kernel: pci_bus a8c9:00: busn_res: [bus 00-ff] end is updated to 00 Jun 25 18:42:40.343968 kernel: pci a8c9:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jun 25 18:42:40.533014 kernel: mlx5_core a8c9:00:02.0: enabling device (0000 -> 0002) Jun 25 18:42:40.773227 kernel: mlx5_core a8c9:00:02.0: firmware version: 14.30.1284 Jun 25 18:42:40.773474 kernel: hv_netvsc 000d3ad8-bf2f-000d-3ad8-bf2f000d3ad8 eth0: VF registering: eth1 Jun 25 18:42:40.773637 kernel: mlx5_core a8c9:00:02.0 eth1: joined to eth0 Jun 25 18:42:40.774402 kernel: mlx5_core a8c9:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jun 25 18:42:40.711182 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jun 25 18:42:40.783410 kernel: mlx5_core a8c9:00:02.0 enP43209s1: renamed from eth1 Jun 25 18:42:40.801366 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (446) Jun 25 18:42:40.818841 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Jun 25 18:42:40.828450 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (462) Jun 25 18:42:40.836674 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jun 25 18:42:40.851264 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jun 25 18:42:40.854506 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jun 25 18:42:40.882472 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:42:40.896473 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:40.907369 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:40.915383 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:41.915372 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:41.915816 disk-uuid[601]: The operation has completed successfully. Jun 25 18:42:41.996123 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:42:41.996247 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:42:42.017465 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:42:42.023385 sh[714]: Success Jun 25 18:42:42.054574 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 25 18:42:42.273307 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:42:42.289169 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:42:42.294294 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 18:42:42.312926 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0 Jun 25 18:42:42.312981 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:42:42.316994 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:42:42.319826 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:42:42.322335 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:42:42.617922 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:42:42.623143 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:42:42.639505 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:42:42.645537 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 18:42:42.659961 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:42.660025 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:42:42.662533 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:42:42.685378 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:42:42.700570 kernel: BTRFS info (device sda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:42.700135 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:42:42.710915 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:42:42.720541 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 18:42:42.746162 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jun 25 18:42:42.758505 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:42:42.780578 systemd-networkd[898]: lo: Link UP Jun 25 18:42:42.780588 systemd-networkd[898]: lo: Gained carrier Jun 25 18:42:42.782672 systemd-networkd[898]: Enumeration completed Jun 25 18:42:42.782962 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:42:42.784536 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:42:42.784541 systemd-networkd[898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:42:42.786138 systemd[1]: Reached target network.target - Network. Jun 25 18:42:42.854379 kernel: mlx5_core a8c9:00:02.0 enP43209s1: Link up Jun 25 18:42:42.888802 kernel: hv_netvsc 000d3ad8-bf2f-000d-3ad8-bf2f000d3ad8 eth0: Data path switched to VF: enP43209s1 Jun 25 18:42:42.888422 systemd-networkd[898]: enP43209s1: Link UP Jun 25 18:42:42.888543 systemd-networkd[898]: eth0: Link UP Jun 25 18:42:42.888702 systemd-networkd[898]: eth0: Gained carrier Jun 25 18:42:42.888714 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:42:42.899757 systemd-networkd[898]: enP43209s1: Gained carrier Jun 25 18:42:42.932412 systemd-networkd[898]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 25 18:42:43.958682 ignition[857]: Ignition 2.19.0 Jun 25 18:42:43.958698 ignition[857]: Stage: fetch-offline Jun 25 18:42:43.960610 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:42:43.958750 ignition[857]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:43.958763 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:43.958924 ignition[857]: parsed url from cmdline: "" Jun 25 18:42:43.958931 ignition[857]: no config URL provided Jun 25 18:42:43.958941 ignition[857]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:42:43.976477 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 25 18:42:43.958954 ignition[857]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:42:43.958963 ignition[857]: failed to fetch config: resource requires networking Jun 25 18:42:43.959574 ignition[857]: Ignition finished successfully Jun 25 18:42:43.991956 ignition[907]: Ignition 2.19.0 Jun 25 18:42:43.991966 ignition[907]: Stage: fetch Jun 25 18:42:43.992189 ignition[907]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:43.992199 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:43.992283 ignition[907]: parsed url from cmdline: "" Jun 25 18:42:43.992287 ignition[907]: no config URL provided Jun 25 18:42:43.992291 ignition[907]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:42:43.992299 ignition[907]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:42:43.992320 ignition[907]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 25 18:42:44.099141 ignition[907]: GET result: OK Jun 25 18:42:44.099304 ignition[907]: config has been read from IMDS userdata Jun 25 18:42:44.099339 ignition[907]: parsing config with SHA512: 0ff1fc9ad4980901e202f1778ace6f6827c2f851d4bfc1f33f267b9054ff4d9d03768a59ab084f1c94238cf7246047c70e4e255a3187919bb59db90a1c754aa9 Jun 25 18:42:44.104474 unknown[907]: fetched base config from "system" Jun 25 18:42:44.104508 unknown[907]: fetched base config from "system" Jun 25 18:42:44.104875 ignition[907]: fetch: fetch complete Jun 25 18:42:44.104516 unknown[907]: fetched user config from "azure" Jun 25 18:42:44.104880 ignition[907]: fetch: fetch passed Jun 25 18:42:44.106492 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 18:42:44.104929 ignition[907]: Ignition finished successfully Jun 25 18:42:44.115653 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 18:42:44.131566 ignition[914]: Ignition 2.19.0 Jun 25 18:42:44.131577 ignition[914]: Stage: kargs Jun 25 18:42:44.131813 ignition[914]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:44.135028 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:42:44.131825 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:44.132720 ignition[914]: kargs: kargs passed Jun 25 18:42:44.132766 ignition[914]: Ignition finished successfully Jun 25 18:42:44.149528 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 18:42:44.151590 systemd-networkd[898]: eth0: Gained IPv6LL Jun 25 18:42:44.168058 ignition[921]: Ignition 2.19.0 Jun 25 18:42:44.168067 ignition[921]: Stage: disks Jun 25 18:42:44.169941 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:42:44.168293 ignition[921]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:44.174115 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:42:44.168305 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:44.179018 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:42:44.169131 ignition[921]: disks: disks passed Jun 25 18:42:44.182082 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:42:44.169169 ignition[921]: Ignition finished successfully Jun 25 18:42:44.189374 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:42:44.191756 systemd[1]: Reached target basic.target - Basic System. 
Jun 25 18:42:44.208504 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 18:42:44.262850 systemd-fsck[930]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jun 25 18:42:44.268078 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:42:44.276447 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:42:44.343754 systemd-networkd[898]: enP43209s1: Gained IPv6LL Jun 25 18:42:44.386371 kernel: EXT4-fs (sda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none. Jun 25 18:42:44.386507 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:42:44.391221 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:42:44.433453 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:42:44.437378 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:42:44.449409 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (941) Jun 25 18:42:44.451548 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 25 18:42:44.461424 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:44.466641 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:42:44.466698 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:42:44.464988 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:42:44.478113 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:42:44.465031 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:42:44.481966 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:42:44.484542 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 18:42:44.495501 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:42:45.180216 coreos-metadata[943]: Jun 25 18:42:45.180 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 18:42:45.187133 coreos-metadata[943]: Jun 25 18:42:45.187 INFO Fetch successful Jun 25 18:42:45.187133 coreos-metadata[943]: Jun 25 18:42:45.187 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 25 18:42:45.207978 coreos-metadata[943]: Jun 25 18:42:45.207 INFO Fetch successful Jun 25 18:42:45.239440 coreos-metadata[943]: Jun 25 18:42:45.239 INFO wrote hostname ci-4012.0.0-a-7f29c71dfa to /sysroot/etc/hostname Jun 25 18:42:45.241254 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:42:45.337018 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:42:45.358681 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:42:45.380120 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:42:45.387001 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:42:46.152182 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:42:46.163455 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:42:46.170457 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jun 25 18:42:46.178105 kernel: BTRFS info (device sda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:46.179364 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:42:46.210381 ignition[1063]: INFO : Ignition 2.19.0 Jun 25 18:42:46.210381 ignition[1063]: INFO : Stage: mount Jun 25 18:42:46.210381 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:46.210381 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:46.225952 ignition[1063]: INFO : mount: mount passed Jun 25 18:42:46.225952 ignition[1063]: INFO : Ignition finished successfully Jun 25 18:42:46.215994 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:42:46.222306 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 18:42:46.235690 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:42:46.247519 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:42:46.257407 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1076) Jun 25 18:42:46.263429 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:46.263478 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:42:46.265784 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:42:46.272375 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:42:46.273536 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:42:46.295201 ignition[1093]: INFO : Ignition 2.19.0 Jun 25 18:42:46.295201 ignition[1093]: INFO : Stage: files Jun 25 18:42:46.299254 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:46.299254 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:46.299254 ignition[1093]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:42:46.315805 ignition[1093]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:42:46.315805 ignition[1093]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:42:46.402746 ignition[1093]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:42:46.409678 ignition[1093]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:42:46.413654 unknown[1093]: wrote ssh authorized keys file for user: core Jun 25 18:42:46.416286 ignition[1093]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:42:46.416286 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:42:46.416286 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 18:42:46.676363 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 18:42:46.774148 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:42:46.779199 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 18:42:46.779199 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" Jun 25 18:42:46.787939 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:42:46.792392 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:42:46.792392 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:42:46.801377 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:42:46.801377 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:42:46.801377 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:42:46.814790 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:42:46.819160 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:42:46.823526 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:42:46.830174 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:42:46.830174 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:42:46.830174 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jun 25 18:42:47.386309 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 18:42:47.680257 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:42:47.680257 ignition[1093]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 18:42:47.701825 ignition[1093]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:42:47.707166 ignition[1093]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:42:47.707166 ignition[1093]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 18:42:47.707166 ignition[1093]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:42:47.723622 ignition[1093]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:42:47.723622 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:42:47.723622 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:42:47.723622 ignition[1093]: 
INFO : files: files passed Jun 25 18:42:47.723622 ignition[1093]: INFO : Ignition finished successfully Jun 25 18:42:47.709132 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:42:47.734301 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:42:47.752516 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:42:47.755560 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:42:47.761501 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 18:42:47.768448 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:42:47.768448 initrd-setup-root-after-ignition[1121]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:42:47.776799 initrd-setup-root-after-ignition[1125]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:42:47.773269 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:42:47.779904 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:42:47.798502 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:42:47.824818 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:42:47.824933 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 18:42:47.832909 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:42:47.838561 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:42:47.846311 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:42:47.858601 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:42:47.873184 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:42:47.883527 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:42:47.896084 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:42:47.897267 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:42:47.898049 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:42:47.898879 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:42:47.899016 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:42:47.899723 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:42:47.900155 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:42:47.900596 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:42:47.901001 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:42:47.901424 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 18:42:47.901847 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 18:42:47.902248 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:42:47.903185 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 18:42:47.903692 systemd[1]: Stopped target local-fs.target - Local File Systems. 
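
The files stage logged above writes the Helm tarball, several YAML files, a sysext symlink, and the prepare-helm.service unit. The Python dict below is an approximate reconstruction of the kind of Ignition (spec 3.x) config fragment that would produce those operations: paths and URLs are copied from the log, but the spec version, field names (per the commonly documented Ignition v3 schema), and the omitted unit body are assumptions, and the smaller files (install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml, update.conf) are left out for brevity. This is not the config the VM actually received.

```python
import json

# Approximate Ignition (spec 3.x) fragment inferred from the files-stage log above.
config = {
    "ignition": {"version": "3.3.0"},                      # assumed spec version
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw",
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            # The log only shows the unit being written and preset-enabled; its body is not shown.
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "# unit body not visible in the log"},
        ],
    },
}

print(json.dumps(config, indent=2))
```
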
Jun 25 18:42:47.904080 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:42:47.904461 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:42:47.904588 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:42:47.905337 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:42:47.905772 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:42:47.906289 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 25 18:42:47.944694 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:42:47.955303 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:42:47.955480 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:42:47.967675 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:42:47.967811 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:42:47.976075 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:42:47.976217 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:42:47.980891 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 25 18:42:47.981028 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:42:48.054364 ignition[1146]: INFO : Ignition 2.19.0 Jun 25 18:42:48.054364 ignition[1146]: INFO : Stage: umount Jun 25 18:42:48.054364 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:48.054364 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:48.054364 ignition[1146]: INFO : umount: umount passed Jun 25 18:42:48.054364 ignition[1146]: INFO : Ignition finished successfully Jun 25 18:42:48.011444 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 18:42:48.017615 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 18:42:48.018017 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:42:48.027525 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:42:48.034094 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:42:48.034279 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:42:48.038676 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:42:48.038839 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:42:48.046592 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:42:48.046701 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:42:48.054619 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:42:48.054706 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 18:42:48.064112 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:42:48.064165 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:42:48.069490 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:42:48.069539 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:42:48.073608 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 18:42:48.073655 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Jun 25 18:42:48.078038 systemd[1]: Stopped target network.target - Network. Jun 25 18:42:48.080438 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:42:48.080497 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:42:48.085820 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:42:48.090298 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:42:48.095646 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:42:48.098935 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 18:42:48.107343 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 18:42:48.111148 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:42:48.111201 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:42:48.113821 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:42:48.113873 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:42:48.119133 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:42:48.119186 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:42:48.123693 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:42:48.123735 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:42:48.129393 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:42:48.136476 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:42:48.141398 systemd-networkd[898]: eth0: DHCPv6 lease lost Jun 25 18:42:48.143556 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 18:42:48.144181 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:42:48.144296 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:42:48.154087 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:42:48.154241 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:42:48.166292 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:42:48.166383 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:42:48.180131 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:42:48.185918 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:42:48.187776 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:42:48.194625 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:42:48.194686 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:42:48.200195 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:42:48.200243 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:42:48.204898 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 18:42:48.204949 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:42:48.212493 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:42:48.296954 kernel: hv_netvsc 000d3ad8-bf2f-000d-3ad8-bf2f000d3ad8 eth0: Data path switched from VF: enP43209s1 Jun 25 18:42:48.245888 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jun 25 18:42:48.246037 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:42:48.250920 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:42:48.251000 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:42:48.255332 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:42:48.255405 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:42:48.262426 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:42:48.262476 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:42:48.263261 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 18:42:48.263295 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:42:48.264090 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:42:48.264128 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:42:48.304513 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:42:48.310781 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:42:48.310891 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:42:48.316832 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:42:48.316893 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:48.333183 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:42:48.333311 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:42:48.337238 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:42:48.337336 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:42:49.421204 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:42:49.421402 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:42:49.428897 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:42:49.431707 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:42:49.431776 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:42:49.447522 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:42:49.456020 systemd[1]: Switching root. 
Jun 25 18:42:49.511950 systemd-journald[176]: Journal stopped Jun 25 18:42:38.102932 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024 Jun 25 18:42:38.102970 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:42:38.102986 kernel: BIOS-provided physical RAM map: Jun 25 18:42:38.102997 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 25 18:42:38.103008 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jun 25 18:42:38.103019 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jun 25 18:42:38.103030 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Jun 25 18:42:38.103042 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Jun 25 18:42:38.103054 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jun 25 18:42:38.103065 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jun 25 18:42:38.103076 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jun 25 18:42:38.103085 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jun 25 18:42:38.103095 kernel: printk: bootconsole [earlyser0] enabled Jun 25 18:42:38.103108 kernel: NX (Execute Disable) protection: active Jun 25 18:42:38.103123 kernel: APIC: Static calls initialized Jun 25 18:42:38.103135 kernel: efi: EFI v2.7 by Microsoft Jun 25 18:42:38.103148 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c0a98 Jun 25 18:42:38.103159 kernel: SMBIOS 3.1.0 present. 
Jun 25 18:42:38.103172 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jun 25 18:42:38.103184 kernel: Hypervisor detected: Microsoft Hyper-V Jun 25 18:42:38.103196 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jun 25 18:42:38.103207 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Jun 25 18:42:38.103219 kernel: Hyper-V: Nested features: 0x1e0101 Jun 25 18:42:38.103232 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jun 25 18:42:38.103248 kernel: Hyper-V: Using hypercall for remote TLB flush Jun 25 18:42:38.103262 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 25 18:42:38.103275 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 25 18:42:38.103289 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jun 25 18:42:38.103302 kernel: tsc: Detected 2593.907 MHz processor Jun 25 18:42:38.103316 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 18:42:38.103330 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 18:42:38.103343 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jun 25 18:42:38.103356 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jun 25 18:42:38.103373 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 18:42:38.103386 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jun 25 18:42:38.103398 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jun 25 18:42:38.103411 kernel: Using GB pages for direct mapping Jun 25 18:42:38.103441 kernel: Secure boot disabled Jun 25 18:42:38.103454 kernel: ACPI: Early table checksum verification disabled Jun 25 18:42:38.103467 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jun 25 18:42:38.103487 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:38.103505 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:38.103519 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jun 25 18:42:38.103533 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jun 25 18:42:38.103547 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:38.103561 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:38.103576 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:38.103592 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:38.103606 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:38.103620 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:38.103634 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:38.103648 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jun 25 18:42:38.103662 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jun 25 18:42:38.103676 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jun 25 18:42:38.103690 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jun 25 18:42:38.103707 kernel: 
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jun 25 18:42:38.103721 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jun 25 18:42:38.103735 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jun 25 18:42:38.103748 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jun 25 18:42:38.103763 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jun 25 18:42:38.103777 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jun 25 18:42:38.103791 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 25 18:42:38.103805 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 25 18:42:38.103823 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jun 25 18:42:38.103840 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jun 25 18:42:38.103854 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jun 25 18:42:38.103868 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jun 25 18:42:38.103883 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jun 25 18:42:38.103897 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jun 25 18:42:38.103911 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jun 25 18:42:38.103924 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jun 25 18:42:38.103938 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jun 25 18:42:38.103952 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jun 25 18:42:38.103969 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jun 25 18:42:38.103984 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jun 25 18:42:38.103998 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jun 25 18:42:38.104011 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jun 25 18:42:38.104025 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jun 25 18:42:38.104039 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jun 25 18:42:38.104054 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jun 25 18:42:38.104068 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jun 25 18:42:38.104082 kernel: Zone ranges: Jun 25 18:42:38.104099 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 18:42:38.104113 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jun 25 18:42:38.104127 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jun 25 18:42:38.104141 kernel: Movable zone start for each node Jun 25 18:42:38.104155 kernel: Early memory node ranges Jun 25 18:42:38.104169 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jun 25 18:42:38.104183 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jun 25 18:42:38.104197 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jun 25 18:42:38.104211 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jun 25 18:42:38.104229 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jun 25 18:42:38.104243 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 18:42:38.104258 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jun 25 18:42:38.104272 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Jun 25 18:42:38.104285 kernel: ACPI: PM-Timer IO Port: 0x408 Jun 25 18:42:38.104299 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jun 25 18:42:38.104312 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jun 25 18:42:38.104326 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 18:42:38.104341 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 18:42:38.104358 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jun 25 18:42:38.104372 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 25 18:42:38.104386 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jun 25 18:42:38.104399 kernel: Booting paravirtualized kernel on Hyper-V Jun 25 18:42:38.104414 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 18:42:38.107404 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 25 18:42:38.107434 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Jun 25 18:42:38.107449 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Jun 25 18:42:38.107462 kernel: pcpu-alloc: [0] 0 1 Jun 25 18:42:38.107480 kernel: Hyper-V: PV spinlocks enabled Jun 25 18:42:38.107494 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 18:42:38.107510 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:42:38.107525 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 18:42:38.107538 kernel: random: crng init done Jun 25 18:42:38.107551 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jun 25 18:42:38.107564 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 18:42:38.107576 kernel: Fallback order for Node 0: 0 Jun 25 18:42:38.107592 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jun 25 18:42:38.107616 kernel: Policy zone: Normal Jun 25 18:42:38.107631 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 18:42:38.107645 kernel: software IO TLB: area num 2. Jun 25 18:42:38.107659 kernel: Memory: 8070932K/8387460K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 316268K reserved, 0K cma-reserved) Jun 25 18:42:38.107673 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 18:42:38.107687 kernel: ftrace: allocating 37650 entries in 148 pages Jun 25 18:42:38.107701 kernel: ftrace: allocated 148 pages with 3 groups Jun 25 18:42:38.107715 kernel: Dynamic Preempt: voluntary Jun 25 18:42:38.107730 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 18:42:38.107747 kernel: rcu: RCU event tracing is enabled. Jun 25 18:42:38.107767 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 18:42:38.107783 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 18:42:38.107798 kernel: Rude variant of Tasks RCU enabled. Jun 25 18:42:38.107814 kernel: Tracing variant of Tasks RCU enabled. 
Jun 25 18:42:38.107830 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 18:42:38.107850 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 18:42:38.107865 kernel: Using NULL legacy PIC Jun 25 18:42:38.107881 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jun 25 18:42:38.107897 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 18:42:38.107912 kernel: Console: colour dummy device 80x25 Jun 25 18:42:38.107928 kernel: printk: console [tty1] enabled Jun 25 18:42:38.107944 kernel: printk: console [ttyS0] enabled Jun 25 18:42:38.107959 kernel: printk: bootconsole [earlyser0] disabled Jun 25 18:42:38.107975 kernel: ACPI: Core revision 20230628 Jun 25 18:42:38.107991 kernel: Failed to register legacy timer interrupt Jun 25 18:42:38.108010 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 18:42:38.108025 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 25 18:42:38.108040 kernel: Hyper-V: Using IPI hypercalls Jun 25 18:42:38.108054 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jun 25 18:42:38.108067 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jun 25 18:42:38.108081 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jun 25 18:42:38.108096 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jun 25 18:42:38.108111 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jun 25 18:42:38.108125 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jun 25 18:42:38.108143 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Jun 25 18:42:38.108156 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 25 18:42:38.108169 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jun 25 18:42:38.108182 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 18:42:38.108196 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 18:42:38.108209 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 18:42:38.108221 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 18:42:38.108235 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jun 25 18:42:38.108249 kernel: RETBleed: Vulnerable Jun 25 18:42:38.108267 kernel: Speculative Store Bypass: Vulnerable Jun 25 18:42:38.108281 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 18:42:38.108294 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 18:42:38.108308 kernel: GDS: Unknown: Dependent on hypervisor status Jun 25 18:42:38.108321 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 18:42:38.108334 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 18:42:38.108348 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 18:42:38.108361 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jun 25 18:42:38.108375 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jun 25 18:42:38.108389 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jun 25 18:42:38.108404 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 18:42:38.108433 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jun 25 18:42:38.112333 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jun 25 18:42:38.112352 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jun 25 18:42:38.112366 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jun 25 18:42:38.112381 kernel: Freeing SMP alternatives memory: 32K Jun 25 18:42:38.112395 kernel: pid_max: default: 32768 minimum: 301 Jun 25 18:42:38.112408 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 18:42:38.112434 kernel: SELinux: Initializing. Jun 25 18:42:38.112448 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 18:42:38.112462 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 18:42:38.112475 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jun 25 18:42:38.112490 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:42:38.112510 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:42:38.112525 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:42:38.112540 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jun 25 18:42:38.112555 kernel: signal: max sigframe size: 3632 Jun 25 18:42:38.112568 kernel: rcu: Hierarchical SRCU implementation. Jun 25 18:42:38.112583 kernel: rcu: Max phase no-delay instances is 400. Jun 25 18:42:38.112598 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 25 18:42:38.112612 kernel: smp: Bringing up secondary CPUs ... Jun 25 18:42:38.112626 kernel: smpboot: x86: Booting SMP configuration: Jun 25 18:42:38.112645 kernel: .... node #0, CPUs: #1 Jun 25 18:42:38.112659 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jun 25 18:42:38.112676 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
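
The kernel above flags this vCPU as vulnerable to TAA, MMIO Stale Data, and RETBleed (with SMT on). After boot, the same status can be re-checked from sysfs; the sketch below simply lists /sys/devices/system/cpu/vulnerabilities/, the standard kernel interface for this, and is independent of this particular log.

```python
from pathlib import Path

# Post-boot check of the mitigations reported above (TAA, MMIO Stale Data, RETBleed,
# Spectre v1/v2, ...). The kernel exposes one file per known CPU vulnerability here.
vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(vuln_dir.iterdir()):
    status = entry.read_text().strip()
    print(f"{entry.name:25} {status}")
```
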
Jun 25 18:42:38.112691 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 18:42:38.112704 kernel: smpboot: Max logical packages: 1 Jun 25 18:42:38.112718 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jun 25 18:42:38.112732 kernel: devtmpfs: initialized Jun 25 18:42:38.112744 kernel: x86/mm: Memory block size: 128MB Jun 25 18:42:38.112762 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jun 25 18:42:38.112777 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 18:42:38.112791 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 18:42:38.112806 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 18:42:38.112822 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 18:42:38.112837 kernel: audit: initializing netlink subsys (disabled) Jun 25 18:42:38.112849 kernel: audit: type=2000 audit(1719340957.028:1): state=initialized audit_enabled=0 res=1 Jun 25 18:42:38.112862 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 18:42:38.112875 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 18:42:38.112892 kernel: cpuidle: using governor menu Jun 25 18:42:38.112905 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 18:42:38.112918 kernel: dca service started, version 1.12.1 Jun 25 18:42:38.112931 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jun 25 18:42:38.112945 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 25 18:42:38.112959 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 18:42:38.112973 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 18:42:38.112986 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 18:42:38.113000 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 18:42:38.113018 kernel: ACPI: Added _OSI(Module Device) Jun 25 18:42:38.113031 kernel: ACPI: Added _OSI(Processor Device) Jun 25 18:42:38.113045 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 18:42:38.113059 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 18:42:38.113073 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 18:42:38.113088 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 25 18:42:38.113102 kernel: ACPI: Interpreter enabled Jun 25 18:42:38.113116 kernel: ACPI: PM: (supports S0 S5) Jun 25 18:42:38.113129 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 18:42:38.113146 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 18:42:38.113160 kernel: PCI: Ignoring E820 reservations for host bridge windows Jun 25 18:42:38.113175 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jun 25 18:42:38.113190 kernel: iommu: Default domain type: Translated Jun 25 18:42:38.113204 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 18:42:38.113219 kernel: efivars: Registered efivars operations Jun 25 18:42:38.113234 kernel: PCI: Using ACPI for IRQ routing Jun 25 18:42:38.113248 kernel: PCI: System does not support PCI Jun 25 18:42:38.113263 kernel: vgaarb: loaded Jun 25 18:42:38.113282 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jun 25 18:42:38.113296 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 18:42:38.113312 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 18:42:38.113327 kernel: 
pnp: PnP ACPI init Jun 25 18:42:38.113342 kernel: pnp: PnP ACPI: found 3 devices Jun 25 18:42:38.113357 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 18:42:38.113372 kernel: NET: Registered PF_INET protocol family Jun 25 18:42:38.113387 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 18:42:38.113401 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jun 25 18:42:38.113431 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 18:42:38.113455 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 18:42:38.113470 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jun 25 18:42:38.113485 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jun 25 18:42:38.113500 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 25 18:42:38.113514 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 25 18:42:38.113529 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 18:42:38.113544 kernel: NET: Registered PF_XDP protocol family Jun 25 18:42:38.113559 kernel: PCI: CLS 0 bytes, default 64 Jun 25 18:42:38.113577 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jun 25 18:42:38.113592 kernel: software IO TLB: mapped [mem 0x000000003b5c0000-0x000000003f5c0000] (64MB) Jun 25 18:42:38.113607 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 25 18:42:38.113622 kernel: Initialise system trusted keyrings Jun 25 18:42:38.113637 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jun 25 18:42:38.113652 kernel: Key type asymmetric registered Jun 25 18:42:38.113667 kernel: Asymmetric key parser 'x509' registered Jun 25 18:42:38.113681 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jun 25 18:42:38.113696 kernel: io scheduler mq-deadline registered Jun 25 18:42:38.113714 kernel: io scheduler kyber registered Jun 25 18:42:38.113728 kernel: io scheduler bfq registered Jun 25 18:42:38.113743 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 18:42:38.113758 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 18:42:38.113773 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 18:42:38.113787 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jun 25 18:42:38.113802 kernel: i8042: PNP: No PS/2 controller found. 
Jun 25 18:42:38.113992 kernel: rtc_cmos 00:02: registered as rtc0 Jun 25 18:42:38.114122 kernel: rtc_cmos 00:02: setting system clock to 2024-06-25T18:42:37 UTC (1719340957) Jun 25 18:42:38.114240 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jun 25 18:42:38.114259 kernel: intel_pstate: CPU model not supported Jun 25 18:42:38.114275 kernel: efifb: probing for efifb Jun 25 18:42:38.114290 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 25 18:42:38.114306 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 25 18:42:38.114320 kernel: efifb: scrolling: redraw Jun 25 18:42:38.114335 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 25 18:42:38.114355 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 18:42:38.114370 kernel: fb0: EFI VGA frame buffer device Jun 25 18:42:38.114385 kernel: pstore: Using crash dump compression: deflate Jun 25 18:42:38.114400 kernel: pstore: Registered efi_pstore as persistent store backend Jun 25 18:42:38.114415 kernel: NET: Registered PF_INET6 protocol family Jun 25 18:42:38.114491 kernel: Segment Routing with IPv6 Jun 25 18:42:38.114506 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 18:42:38.114521 kernel: NET: Registered PF_PACKET protocol family Jun 25 18:42:38.114536 kernel: Key type dns_resolver registered Jun 25 18:42:38.114551 kernel: IPI shorthand broadcast: enabled Jun 25 18:42:38.114570 kernel: sched_clock: Marking stable (840002900, 45503500)->(1110488800, -224982400) Jun 25 18:42:38.114585 kernel: registered taskstats version 1 Jun 25 18:42:38.114600 kernel: Loading compiled-in X.509 certificates Jun 25 18:42:38.114614 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90' Jun 25 18:42:38.114629 kernel: Key type .fscrypt registered Jun 25 18:42:38.114644 kernel: Key type fscrypt-provisioning registered Jun 25 18:42:38.114659 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 18:42:38.114674 kernel: ima: Allocated hash algorithm: sha1 Jun 25 18:42:38.114692 kernel: ima: No architecture policies found Jun 25 18:42:38.114707 kernel: clk: Disabling unused clocks Jun 25 18:42:38.114722 kernel: Freeing unused kernel image (initmem) memory: 49384K Jun 25 18:42:38.114737 kernel: Write protecting the kernel read-only data: 36864k Jun 25 18:42:38.114752 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K Jun 25 18:42:38.114767 kernel: Run /init as init process Jun 25 18:42:38.114782 kernel: with arguments: Jun 25 18:42:38.114796 kernel: /init Jun 25 18:42:38.114811 kernel: with environment: Jun 25 18:42:38.114828 kernel: HOME=/ Jun 25 18:42:38.114843 kernel: TERM=linux Jun 25 18:42:38.114858 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 18:42:38.114875 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:42:38.114894 systemd[1]: Detected virtualization microsoft. Jun 25 18:42:38.114910 systemd[1]: Detected architecture x86-64. Jun 25 18:42:38.114925 systemd[1]: Running in initrd. Jun 25 18:42:38.114941 systemd[1]: No hostname configured, using default hostname. Jun 25 18:42:38.114959 systemd[1]: Hostname set to . Jun 25 18:42:38.114976 systemd[1]: Initializing machine ID from random generator. 
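
The rtc_cmos line above sets the system clock to 2024-06-25T18:42:37 UTC and prints the matching Unix timestamp 1719340957 (the same epoch second appears in the earlier audit initialization message). The mapping is easy to verify:

```python
from datetime import datetime, timezone

# rtc_cmos pairs 2024-06-25T18:42:37 UTC with epoch 1719340957; confirm the conversion.
ts = 1719340957
print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())  # 2024-06-25T18:42:37+00:00
assert int(datetime(2024, 6, 25, 18, 42, 37, tzinfo=timezone.utc).timestamp()) == ts
```
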
Jun 25 18:42:38.114991 systemd[1]: Queued start job for default target initrd.target. Jun 25 18:42:38.115007 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:42:38.115023 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:42:38.115040 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 18:42:38.115055 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:42:38.115071 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 25 18:42:38.115090 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 18:42:38.115108 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 18:42:38.115125 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 18:42:38.115140 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:42:38.115155 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:42:38.115171 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:42:38.115187 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:42:38.115206 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:42:38.115221 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:42:38.115237 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:42:38.115254 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:42:38.115270 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 18:42:38.115285 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 18:42:38.115301 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:42:38.115317 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:42:38.115337 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:42:38.115353 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:42:38.115369 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:42:38.115384 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:42:38.115400 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:42:38.115416 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:42:38.121621 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:42:38.121643 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:42:38.121658 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:38.121711 systemd-journald[176]: Collecting audit messages is disabled. Jun 25 18:42:38.121748 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 25 18:42:38.121765 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:42:38.121782 systemd[1]: Finished systemd-fsck-usr.service. 
Jun 25 18:42:38.121806 systemd-journald[176]: Journal started Jun 25 18:42:38.121853 systemd-journald[176]: Runtime Journal (/run/log/journal/dc07496189f64b2a933b3c6373702c0a) is 8.0M, max 158.8M, 150.8M free. Jun 25 18:42:38.094988 systemd-modules-load[177]: Inserted module 'overlay' Jun 25 18:42:38.138599 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:42:38.135575 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:38.147451 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 18:42:38.152591 systemd-modules-load[177]: Inserted module 'br_netfilter' Jun 25 18:42:38.157806 kernel: Bridge firewalling registered Jun 25 18:42:38.155060 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:42:38.163585 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:42:38.170601 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:42:38.175716 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:42:38.182499 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:42:38.189651 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:42:38.197521 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:42:38.209388 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:42:38.216239 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:42:38.222923 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:42:38.228685 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:42:38.234656 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:42:38.248573 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 18:42:38.267255 dracut-cmdline[215]: dracut-dracut-053 Jun 25 18:42:38.271311 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:42:38.294616 systemd-resolved[209]: Positive Trust Anchors: Jun 25 18:42:38.294637 systemd-resolved[209]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:42:38.294692 systemd-resolved[209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:42:38.299351 systemd-resolved[209]: Defaulting to hostname 'linux'. Jun 25 18:42:38.300364 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:42:38.324478 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:42:38.354447 kernel: SCSI subsystem initialized Jun 25 18:42:38.366444 kernel: Loading iSCSI transport class v2.0-870. Jun 25 18:42:38.379451 kernel: iscsi: registered transport (tcp) Jun 25 18:42:38.405762 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:42:38.405854 kernel: QLogic iSCSI HBA Driver Jun 25 18:42:38.441748 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:42:38.451630 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:42:38.485111 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:42:38.485217 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:42:38.488323 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:42:38.531458 kernel: raid6: avx512x4 gen() 18437 MB/s Jun 25 18:42:38.552451 kernel: raid6: avx512x2 gen() 18382 MB/s Jun 25 18:42:38.571437 kernel: raid6: avx512x1 gen() 18309 MB/s Jun 25 18:42:38.589438 kernel: raid6: avx2x4 gen() 18303 MB/s Jun 25 18:42:38.608438 kernel: raid6: avx2x2 gen() 18240 MB/s Jun 25 18:42:38.628364 kernel: raid6: avx2x1 gen() 14104 MB/s Jun 25 18:42:38.628401 kernel: raid6: using algorithm avx512x4 gen() 18437 MB/s Jun 25 18:42:38.649540 kernel: raid6: .... xor() 7171 MB/s, rmw enabled Jun 25 18:42:38.649595 kernel: raid6: using avx512x2 recovery algorithm Jun 25 18:42:38.677449 kernel: xor: automatically using best checksumming function avx Jun 25 18:42:38.846453 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:42:38.856497 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:42:38.866623 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:42:38.880592 systemd-udevd[397]: Using default interface naming scheme 'v255'. Jun 25 18:42:38.884971 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:42:38.897601 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:42:38.910480 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Jun 25 18:42:38.937038 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:42:38.952685 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:42:38.993254 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:42:39.004223 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
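
The dracut-cmdline[215] lines above echo the full kernel command line (rd.driver.pre=btrfs, flatcar.oem.id=azure, verity.usrhash=…, two console= entries, and so on). A generic way to inspect those parameters on a running system is to split /proc/cmdline into bare flags and key=value options; a minimal sketch:

```python
from pathlib import Path

# Split the kernel command line (as echoed by dracut-cmdline[215] above) into bare flags
# and key=value options. Repeated keys such as "console=" are kept in order.
params = Path("/proc/cmdline").read_text().split()

flags, options = [], []
for p in params:
    if "=" in p:
        options.append(tuple(p.split("=", 1)))
    else:
        flags.append(p)

print("flags  :", flags)             # e.g. ['flatcar.autologin'] on this command line
for key, value in options:
    print(f"{key} = {value}")        # console appears twice: tty1 and ttyS0,115200n8
```
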
Jun 25 18:42:39.026031 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:42:39.036255 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:42:39.042517 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:42:39.045736 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:42:39.059596 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:42:39.088752 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:42:39.106925 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 18:42:39.122150 kernel: hv_vmbus: Vmbus version:5.2 Jun 25 18:42:39.126672 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:42:39.126909 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:42:39.169018 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 18:42:39.169048 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 18:42:39.169061 kernel: PTP clock support registered Jun 25 18:42:39.169078 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 25 18:42:39.169088 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 25 18:42:39.169103 kernel: hv_utils: Registering HyperV Utility Driver Jun 25 18:42:39.131541 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:42:39.177898 kernel: hv_vmbus: registering driver hv_utils Jun 25 18:42:39.137578 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:42:39.137821 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:39.142987 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:39.174825 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:39.204381 kernel: hv_utils: Shutdown IC version 3.2 Jun 25 18:42:39.204458 kernel: hv_utils: Heartbeat IC version 3.0 Jun 25 18:42:39.195778 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:42:39.195936 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:39.214445 kernel: hv_utils: TimeSync IC version 4.0 Jun 25 18:42:40.003810 systemd-resolved[209]: Clock change detected. Flushing caches. Jun 25 18:42:40.022521 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:40.035410 kernel: hv_vmbus: registering driver hv_storvsc Jun 25 18:42:40.035458 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 18:42:40.042548 kernel: scsi host1: storvsc_host_t Jun 25 18:42:40.042782 kernel: scsi host0: storvsc_host_t Jun 25 18:42:40.047372 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jun 25 18:42:40.048363 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:40.060326 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 18:42:40.060363 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jun 25 18:42:40.060397 kernel: AES CTR mode by8 optimization enabled Jun 25 18:42:40.067375 kernel: hv_vmbus: registering driver hv_netvsc Jun 25 18:42:40.075400 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jun 25 18:42:40.095769 kernel: hv_vmbus: registering driver hid_hyperv Jun 25 18:42:40.118292 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 25 18:42:40.118340 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 25 18:42:40.124671 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:42:40.139849 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 25 18:42:40.145516 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 18:42:40.145541 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 25 18:42:40.161239 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jun 25 18:42:40.174954 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jun 25 18:42:40.175146 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 25 18:42:40.175320 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jun 25 18:42:40.176330 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jun 25 18:42:40.176519 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:40.176541 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 25 18:42:40.282109 kernel: hv_netvsc 000d3ad8-bf2f-000d-3ad8-bf2f000d3ad8 eth0: VF slot 1 added Jun 25 18:42:40.290366 kernel: hv_vmbus: registering driver hv_pci Jun 25 18:42:40.294925 kernel: hv_pci 9282ae27-a8c9-4978-95c8-5f26c2dbf005: PCI VMBus probing: Using version 0x10004 Jun 25 18:42:40.341992 kernel: hv_pci 9282ae27-a8c9-4978-95c8-5f26c2dbf005: PCI host bridge to bus a8c9:00 Jun 25 18:42:40.342173 kernel: pci_bus a8c9:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jun 25 18:42:40.342936 kernel: pci_bus a8c9:00: No busn resource found for root bus, will use [bus 00-ff] Jun 25 18:42:40.343097 kernel: pci a8c9:00:02.0: [15b3:1016] type 00 class 0x020000 Jun 25 18:42:40.343292 kernel: pci a8c9:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jun 25 18:42:40.343487 kernel: pci a8c9:00:02.0: enabling Extended Tags Jun 25 18:42:40.343657 kernel: pci a8c9:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a8c9:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jun 25 18:42:40.343824 kernel: pci_bus a8c9:00: busn_res: [bus 00-ff] end is updated to 00 Jun 25 18:42:40.343968 kernel: pci a8c9:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jun 25 18:42:40.533014 kernel: mlx5_core a8c9:00:02.0: enabling device (0000 -> 0002) Jun 25 18:42:40.773227 kernel: mlx5_core a8c9:00:02.0: firmware version: 14.30.1284 Jun 25 18:42:40.773474 kernel: hv_netvsc 000d3ad8-bf2f-000d-3ad8-bf2f000d3ad8 eth0: VF registering: eth1 Jun 25 18:42:40.773637 kernel: mlx5_core a8c9:00:02.0 eth1: joined to eth0 Jun 25 18:42:40.774402 kernel: mlx5_core a8c9:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jun 25 18:42:40.711182 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jun 25 18:42:40.783410 kernel: mlx5_core a8c9:00:02.0 enP43209s1: renamed from eth1 Jun 25 18:42:40.801366 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (446) Jun 25 18:42:40.818841 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Jun 25 18:42:40.828450 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (462) Jun 25 18:42:40.836674 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jun 25 18:42:40.851264 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jun 25 18:42:40.854506 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jun 25 18:42:40.882472 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:42:40.896473 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:40.907369 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:40.915383 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:41.915372 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:41.915816 disk-uuid[601]: The operation has completed successfully. Jun 25 18:42:41.996123 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:42:41.996247 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:42:42.017465 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:42:42.023385 sh[714]: Success Jun 25 18:42:42.054574 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 25 18:42:42.273307 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:42:42.289169 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:42:42.294294 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 18:42:42.312926 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0 Jun 25 18:42:42.312981 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:42:42.316994 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:42:42.319826 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:42:42.322335 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:42:42.617922 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:42:42.623143 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:42:42.639505 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:42:42.645537 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 18:42:42.659961 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:42.660025 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:42:42.662533 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:42:42.685378 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:42:42.700570 kernel: BTRFS info (device sda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:42.700135 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:42:42.710915 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:42:42.720541 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 18:42:42.746162 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jun 25 18:42:42.758505 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:42:42.780578 systemd-networkd[898]: lo: Link UP Jun 25 18:42:42.780588 systemd-networkd[898]: lo: Gained carrier Jun 25 18:42:42.782672 systemd-networkd[898]: Enumeration completed Jun 25 18:42:42.782962 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:42:42.784536 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:42:42.784541 systemd-networkd[898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:42:42.786138 systemd[1]: Reached target network.target - Network. Jun 25 18:42:42.854379 kernel: mlx5_core a8c9:00:02.0 enP43209s1: Link up Jun 25 18:42:42.888802 kernel: hv_netvsc 000d3ad8-bf2f-000d-3ad8-bf2f000d3ad8 eth0: Data path switched to VF: enP43209s1 Jun 25 18:42:42.888422 systemd-networkd[898]: enP43209s1: Link UP Jun 25 18:42:42.888543 systemd-networkd[898]: eth0: Link UP Jun 25 18:42:42.888702 systemd-networkd[898]: eth0: Gained carrier Jun 25 18:42:42.888714 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:42:42.899757 systemd-networkd[898]: enP43209s1: Gained carrier Jun 25 18:42:42.932412 systemd-networkd[898]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 25 18:42:43.958682 ignition[857]: Ignition 2.19.0 Jun 25 18:42:43.958698 ignition[857]: Stage: fetch-offline Jun 25 18:42:43.960610 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:42:43.958750 ignition[857]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:43.958763 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:43.958924 ignition[857]: parsed url from cmdline: "" Jun 25 18:42:43.958931 ignition[857]: no config URL provided Jun 25 18:42:43.958941 ignition[857]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:42:43.976477 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 25 18:42:43.958954 ignition[857]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:42:43.958963 ignition[857]: failed to fetch config: resource requires networking Jun 25 18:42:43.959574 ignition[857]: Ignition finished successfully Jun 25 18:42:43.991956 ignition[907]: Ignition 2.19.0 Jun 25 18:42:43.991966 ignition[907]: Stage: fetch Jun 25 18:42:43.992189 ignition[907]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:43.992199 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:43.992283 ignition[907]: parsed url from cmdline: "" Jun 25 18:42:43.992287 ignition[907]: no config URL provided Jun 25 18:42:43.992291 ignition[907]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:42:43.992299 ignition[907]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:42:43.992320 ignition[907]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 25 18:42:44.099141 ignition[907]: GET result: OK Jun 25 18:42:44.099304 ignition[907]: config has been read from IMDS userdata Jun 25 18:42:44.099339 ignition[907]: parsing config with SHA512: 0ff1fc9ad4980901e202f1778ace6f6827c2f851d4bfc1f33f267b9054ff4d9d03768a59ab084f1c94238cf7246047c70e4e255a3187919bb59db90a1c754aa9 Jun 25 18:42:44.104474 unknown[907]: fetched base config from "system" Jun 25 18:42:44.104508 unknown[907]: fetched base config from "system" Jun 25 18:42:44.104875 ignition[907]: fetch: fetch complete Jun 25 18:42:44.104516 unknown[907]: fetched user config from "azure" Jun 25 18:42:44.104880 ignition[907]: fetch: fetch passed Jun 25 18:42:44.106492 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 18:42:44.104929 ignition[907]: Ignition finished successfully Jun 25 18:42:44.115653 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 18:42:44.131566 ignition[914]: Ignition 2.19.0 Jun 25 18:42:44.131577 ignition[914]: Stage: kargs Jun 25 18:42:44.131813 ignition[914]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:44.135028 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:42:44.131825 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:44.132720 ignition[914]: kargs: kargs passed Jun 25 18:42:44.132766 ignition[914]: Ignition finished successfully Jun 25 18:42:44.149528 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 18:42:44.151590 systemd-networkd[898]: eth0: Gained IPv6LL Jun 25 18:42:44.168058 ignition[921]: Ignition 2.19.0 Jun 25 18:42:44.168067 ignition[921]: Stage: disks Jun 25 18:42:44.169941 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:42:44.168293 ignition[921]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:44.174115 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:42:44.168305 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:44.179018 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:42:44.169131 ignition[921]: disks: disks passed Jun 25 18:42:44.182082 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:42:44.169169 ignition[921]: Ignition finished successfully Jun 25 18:42:44.189374 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:42:44.191756 systemd[1]: Reached target basic.target - Basic System. 
Jun 25 18:42:44.208504 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 18:42:44.262850 systemd-fsck[930]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jun 25 18:42:44.268078 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:42:44.276447 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:42:44.343754 systemd-networkd[898]: enP43209s1: Gained IPv6LL Jun 25 18:42:44.386371 kernel: EXT4-fs (sda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none. Jun 25 18:42:44.386507 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:42:44.391221 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:42:44.433453 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:42:44.437378 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:42:44.449409 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (941) Jun 25 18:42:44.451548 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 25 18:42:44.461424 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:44.466641 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:42:44.466698 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:42:44.464988 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:42:44.478113 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:42:44.465031 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:42:44.481966 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:42:44.484542 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 18:42:44.495501 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:42:45.180216 coreos-metadata[943]: Jun 25 18:42:45.180 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 18:42:45.187133 coreos-metadata[943]: Jun 25 18:42:45.187 INFO Fetch successful Jun 25 18:42:45.187133 coreos-metadata[943]: Jun 25 18:42:45.187 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 25 18:42:45.207978 coreos-metadata[943]: Jun 25 18:42:45.207 INFO Fetch successful Jun 25 18:42:45.239440 coreos-metadata[943]: Jun 25 18:42:45.239 INFO wrote hostname ci-4012.0.0-a-7f29c71dfa to /sysroot/etc/hostname Jun 25 18:42:45.241254 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:42:45.337018 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:42:45.358681 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:42:45.380120 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:42:45.387001 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:42:46.152182 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:42:46.163455 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:42:46.170457 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jun 25 18:42:46.178105 kernel: BTRFS info (device sda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:46.179364 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:42:46.210381 ignition[1063]: INFO : Ignition 2.19.0 Jun 25 18:42:46.210381 ignition[1063]: INFO : Stage: mount Jun 25 18:42:46.210381 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:46.210381 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:46.225952 ignition[1063]: INFO : mount: mount passed Jun 25 18:42:46.225952 ignition[1063]: INFO : Ignition finished successfully Jun 25 18:42:46.215994 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:42:46.222306 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 18:42:46.235690 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:42:46.247519 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:42:46.257407 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1076) Jun 25 18:42:46.263429 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:46.263478 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:42:46.265784 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:42:46.272375 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:42:46.273536 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:42:46.295201 ignition[1093]: INFO : Ignition 2.19.0 Jun 25 18:42:46.295201 ignition[1093]: INFO : Stage: files Jun 25 18:42:46.299254 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:46.299254 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:46.299254 ignition[1093]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:42:46.315805 ignition[1093]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:42:46.315805 ignition[1093]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:42:46.402746 ignition[1093]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:42:46.409678 ignition[1093]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:42:46.413654 unknown[1093]: wrote ssh authorized keys file for user: core Jun 25 18:42:46.416286 ignition[1093]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:42:46.416286 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:42:46.416286 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 18:42:46.676363 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 18:42:46.774148 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:42:46.779199 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 18:42:46.779199 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" Jun 25 18:42:46.787939 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:42:46.792392 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:42:46.792392 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:42:46.801377 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:42:46.801377 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:42:46.801377 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:42:46.814790 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:42:46.819160 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:42:46.823526 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:42:46.830174 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:42:46.830174 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:42:46.830174 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jun 25 18:42:47.386309 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 18:42:47.680257 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:42:47.680257 ignition[1093]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 18:42:47.701825 ignition[1093]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:42:47.707166 ignition[1093]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:42:47.707166 ignition[1093]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 18:42:47.707166 ignition[1093]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:42:47.723622 ignition[1093]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:42:47.723622 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:42:47.723622 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:42:47.723622 ignition[1093]: 
INFO : files: files passed Jun 25 18:42:47.723622 ignition[1093]: INFO : Ignition finished successfully Jun 25 18:42:47.709132 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:42:47.734301 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:42:47.752516 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:42:47.755560 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:42:47.761501 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 18:42:47.768448 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:42:47.768448 initrd-setup-root-after-ignition[1121]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:42:47.776799 initrd-setup-root-after-ignition[1125]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:42:47.773269 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:42:47.779904 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:42:47.798502 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:42:47.824818 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:42:47.824933 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 18:42:47.832909 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:42:47.838561 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:42:47.846311 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:42:47.858601 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:42:47.873184 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:42:47.883527 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:42:47.896084 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:42:47.897267 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:42:47.898049 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:42:47.898879 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:42:47.899016 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:42:47.899723 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:42:47.900155 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:42:47.900596 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:42:47.901001 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:42:47.901424 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 18:42:47.901847 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 18:42:47.902248 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:42:47.903185 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 18:42:47.903692 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jun 25 18:42:47.904080 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:42:47.904461 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:42:47.904588 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:42:47.905337 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:42:47.905772 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:42:47.906289 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 25 18:42:47.944694 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:42:47.955303 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:42:47.955480 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:42:47.967675 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:42:47.967811 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:42:47.976075 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:42:47.976217 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:42:47.980891 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 25 18:42:47.981028 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:42:48.054364 ignition[1146]: INFO : Ignition 2.19.0 Jun 25 18:42:48.054364 ignition[1146]: INFO : Stage: umount Jun 25 18:42:48.054364 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:48.054364 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:48.054364 ignition[1146]: INFO : umount: umount passed Jun 25 18:42:48.054364 ignition[1146]: INFO : Ignition finished successfully Jun 25 18:42:48.011444 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 18:42:48.017615 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 18:42:48.018017 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:42:48.027525 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:42:48.034094 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:42:48.034279 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:42:48.038676 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:42:48.038839 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:42:48.046592 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:42:48.046701 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:42:48.054619 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:42:48.054706 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 18:42:48.064112 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:42:48.064165 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:42:48.069490 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:42:48.069539 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:42:48.073608 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 18:42:48.073655 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Jun 25 18:42:48.078038 systemd[1]: Stopped target network.target - Network. Jun 25 18:42:48.080438 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:42:48.080497 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:42:48.085820 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:42:48.090298 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:42:48.095646 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:42:48.098935 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 18:42:48.107343 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 18:42:48.111148 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:42:48.111201 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:42:48.113821 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:42:48.113873 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:42:48.119133 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:42:48.119186 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:42:48.123693 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:42:48.123735 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:42:48.129393 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:42:48.136476 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:42:48.141398 systemd-networkd[898]: eth0: DHCPv6 lease lost Jun 25 18:42:48.143556 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 18:42:48.144181 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:42:48.144296 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:42:48.154087 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:42:48.154241 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:42:48.166292 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:42:48.166383 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:42:48.180131 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:42:48.185918 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:42:48.187776 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:42:48.194625 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:42:48.194686 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:42:48.200195 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:42:48.200243 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:42:48.204898 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 18:42:48.204949 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:42:48.212493 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:42:48.296954 kernel: hv_netvsc 000d3ad8-bf2f-000d-3ad8-bf2f000d3ad8 eth0: Data path switched from VF: enP43209s1 Jun 25 18:42:48.245888 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jun 25 18:42:48.246037 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:42:48.250920 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:42:48.251000 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:42:48.255332 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:42:48.255405 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:42:48.262426 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:42:48.262476 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:42:48.263261 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 18:42:48.263295 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:42:48.264090 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:42:48.264128 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:42:48.304513 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:42:48.310781 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:42:48.310891 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:42:48.316832 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:42:48.316893 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:48.333183 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:42:48.333311 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:42:48.337238 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:42:48.337336 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:42:49.421204 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:42:49.421402 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:42:49.428897 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:42:49.431707 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:42:49.431776 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:42:49.447522 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:42:49.456020 systemd[1]: Switching root. Jun 25 18:42:49.511950 systemd-journald[176]: Journal stopped Jun 25 18:42:55.309052 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Jun 25 18:42:55.309091 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 18:42:55.309107 kernel: SELinux: policy capability open_perms=1 Jun 25 18:42:55.309119 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 18:42:55.309131 kernel: SELinux: policy capability always_check_network=0 Jun 25 18:42:55.309143 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 18:42:55.309157 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 18:42:55.309170 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 18:42:55.309181 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 18:42:55.309193 kernel: audit: type=1403 audit(1719340971.980:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 18:42:55.309205 systemd[1]: Successfully loaded SELinux policy in 151.324ms. 
Jun 25 18:42:55.309215 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.679ms. Jun 25 18:42:55.309229 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:42:55.309239 systemd[1]: Detected virtualization microsoft. Jun 25 18:42:55.309254 systemd[1]: Detected architecture x86-64. Jun 25 18:42:55.309265 systemd[1]: Detected first boot. Jun 25 18:42:55.309278 systemd[1]: Hostname set to . Jun 25 18:42:55.309289 systemd[1]: Initializing machine ID from random generator. Jun 25 18:42:55.309301 zram_generator::config[1189]: No configuration found. Jun 25 18:42:55.309316 systemd[1]: Populated /etc with preset unit settings. Jun 25 18:42:55.309328 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 18:42:55.309341 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 18:42:55.309368 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 18:42:55.309379 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 18:42:55.309388 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 18:42:55.309402 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 18:42:55.309414 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 18:42:55.309427 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 18:42:55.309438 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 18:42:55.309451 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 18:42:55.309460 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 18:42:55.309473 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:42:55.309483 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:42:55.309495 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 18:42:55.309508 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 18:42:55.309521 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 18:42:55.309531 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:42:55.309543 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 25 18:42:55.309553 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:42:55.309568 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 18:42:55.309585 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 18:42:55.309598 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 18:42:55.309613 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 18:42:55.309625 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jun 25 18:42:55.309637 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:42:55.309650 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:42:55.309660 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:42:55.309673 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 18:42:55.309683 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 18:42:55.309698 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:42:55.309709 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:42:55.309722 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:42:55.309733 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 18:42:55.309745 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 18:42:55.309760 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 18:42:55.309770 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 18:42:55.309783 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:42:55.309794 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 18:42:55.309807 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 18:42:55.309817 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 18:42:55.309831 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 18:42:55.309841 systemd[1]: Reached target machines.target - Containers. Jun 25 18:42:55.309856 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 18:42:55.309873 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:42:55.309884 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:42:55.309897 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 18:42:55.309908 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:42:55.309920 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:42:55.309931 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:42:55.309944 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 18:42:55.309954 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:42:55.309969 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 18:42:55.309981 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 18:42:55.309993 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 18:42:55.310005 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 18:42:55.310019 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 18:42:55.310029 kernel: loop: module loaded Jun 25 18:42:55.310041 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:42:55.310053 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jun 25 18:42:55.310068 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 18:42:55.310079 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 18:42:55.310091 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:42:55.310102 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 18:42:55.310114 systemd[1]: Stopped verity-setup.service. Jun 25 18:42:55.310127 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:42:55.310138 kernel: ACPI: bus type drm_connector registered Jun 25 18:42:55.310168 systemd-journald[1294]: Collecting audit messages is disabled. Jun 25 18:42:55.310197 kernel: fuse: init (API version 7.39) Jun 25 18:42:55.310209 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 18:42:55.310220 systemd-journald[1294]: Journal started Jun 25 18:42:55.310246 systemd-journald[1294]: Runtime Journal (/run/log/journal/6bbaaf6ff5454706bb784d086c64e0d6) is 8.0M, max 158.8M, 150.8M free. Jun 25 18:42:54.444790 systemd[1]: Queued start job for default target multi-user.target. Jun 25 18:42:54.585920 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 25 18:42:54.586294 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 18:42:55.321021 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:42:55.321645 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 18:42:55.324460 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 18:42:55.326977 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 18:42:55.329745 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 18:42:55.332670 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 18:42:55.335258 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 18:42:55.338294 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:42:55.341699 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 18:42:55.341854 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 18:42:55.344854 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:42:55.345004 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:42:55.348184 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:42:55.348384 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:42:55.351481 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:42:55.351656 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:42:55.357023 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 18:42:55.357207 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 18:42:55.360524 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:42:55.360715 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:42:55.364643 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 18:42:55.368380 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jun 25 18:42:55.388921 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 18:42:55.400436 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 18:42:55.411483 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 18:42:55.415435 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 18:42:55.415481 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:42:55.420468 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jun 25 18:42:55.429519 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 25 18:42:55.433663 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 18:42:55.436281 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:42:55.439549 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 18:42:55.446089 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 18:42:55.449436 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:42:55.453586 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 25 18:42:55.460748 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:42:55.468546 systemd-journald[1294]: Time spent on flushing to /var/log/journal/6bbaaf6ff5454706bb784d086c64e0d6 is 18.173ms for 952 entries. Jun 25 18:42:55.468546 systemd-journald[1294]: System Journal (/var/log/journal/6bbaaf6ff5454706bb784d086c64e0d6) is 8.0M, max 2.6G, 2.6G free. Jun 25 18:42:55.501986 systemd-journald[1294]: Received client request to flush runtime journal. Jun 25 18:42:55.464514 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 18:42:55.478087 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 18:42:55.482892 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:42:55.488396 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:42:55.491794 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 18:42:55.495528 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 18:42:55.500901 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 25 18:42:55.507830 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 18:42:55.531246 kernel: loop0: detected capacity change from 0 to 209816 Jun 25 18:42:55.531288 kernel: block loop0: the capability attribute has been deprecated. Jun 25 18:42:55.521292 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 25 18:42:55.527827 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 18:42:55.540545 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 18:42:55.548477 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jun 25 18:42:55.588922 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 18:42:55.554652 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 18:42:55.592531 udevadm[1338]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 18:42:55.665378 kernel: loop1: detected capacity change from 0 to 80568 Jun 25 18:42:55.785077 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 18:42:55.785772 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 18:42:55.806382 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 18:42:55.814925 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:42:55.847800 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:42:55.911848 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Jun 25 18:42:55.911871 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Jun 25 18:42:55.916665 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:42:56.117377 kernel: loop2: detected capacity change from 0 to 139760 Jun 25 18:42:56.799266 kernel: loop3: detected capacity change from 0 to 62456 Jun 25 18:42:57.063805 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 18:42:57.071612 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:42:57.097501 systemd-udevd[1350]: Using default interface naming scheme 'v255'. Jun 25 18:42:57.239376 kernel: loop4: detected capacity change from 0 to 209816 Jun 25 18:42:57.254389 kernel: loop5: detected capacity change from 0 to 80568 Jun 25 18:42:57.263373 kernel: loop6: detected capacity change from 0 to 139760 Jun 25 18:42:57.277409 kernel: loop7: detected capacity change from 0 to 62456 Jun 25 18:42:57.282719 (sd-merge)[1352]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jun 25 18:42:57.283338 (sd-merge)[1352]: Merged extensions into '/usr'. Jun 25 18:42:57.287094 systemd[1]: Reloading requested from client PID 1322 ('systemd-sysext') (unit systemd-sysext.service)... Jun 25 18:42:57.287108 systemd[1]: Reloading... Jun 25 18:42:57.356377 zram_generator::config[1377]: No configuration found. Jun 25 18:42:57.446374 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1376) Jun 25 18:42:57.551458 kernel: hv_vmbus: registering driver hv_balloon Jun 25 18:42:57.556427 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 25 18:42:57.612529 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 18:42:57.612609 kernel: hv_vmbus: registering driver hyperv_fb Jun 25 18:42:57.618409 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 25 18:42:57.618484 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 25 18:42:57.624367 kernel: Console: switching to colour dummy device 80x25 Jun 25 18:42:57.635511 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 18:42:57.790983 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jun 25 18:42:57.861378 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1397) Jun 25 18:42:57.939968 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 25 18:42:57.941533 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jun 25 18:42:57.942700 systemd[1]: Reloading finished in 655 ms. Jun 25 18:42:57.987021 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:42:57.995995 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 18:42:58.049546 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 18:42:58.064625 systemd[1]: Starting ensure-sysext.service... Jun 25 18:42:58.069808 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 18:42:58.077037 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:42:58.085459 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:42:58.100628 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:58.113210 systemd[1]: Reloading requested from client PID 1507 ('systemctl') (unit ensure-sysext.service)... Jun 25 18:42:58.113231 systemd[1]: Reloading... Jun 25 18:42:58.141550 systemd-tmpfiles[1510]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 18:42:58.142095 systemd-tmpfiles[1510]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 18:42:58.144914 systemd-tmpfiles[1510]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 18:42:58.146288 systemd-tmpfiles[1510]: ACLs are not supported, ignoring. Jun 25 18:42:58.146524 systemd-tmpfiles[1510]: ACLs are not supported, ignoring. Jun 25 18:42:58.153476 systemd-tmpfiles[1510]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:42:58.153594 systemd-tmpfiles[1510]: Skipping /boot Jun 25 18:42:58.189847 systemd-tmpfiles[1510]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:42:58.190806 systemd-tmpfiles[1510]: Skipping /boot Jun 25 18:42:58.238375 zram_generator::config[1551]: No configuration found. Jun 25 18:42:58.347811 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:42:58.425666 systemd[1]: Reloading finished in 311 ms. Jun 25 18:42:58.445375 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 18:42:58.454918 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 18:42:58.456322 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:42:58.466062 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:42:58.476580 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 18:42:58.481484 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 18:42:58.497467 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
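Editor's note: the systemd-tmpfiles warnings above ("Duplicate line for path /root", "/var/log/journal", "/var/lib/systemd") simply mean the same path appears in more than one tmpfiles.d fragment, and later lines are ignored. A simplified sketch of that duplicate check, assuming the standard fragment layout where the path is the second whitespace-separated field; the real tool also handles quoting and specifier expansion, which this ignores.

# Report tmpfiles.d paths that are declared more than once.
import glob
from collections import defaultdict

def duplicate_tmpfiles_paths(pattern="/usr/lib/tmpfiles.d/*.conf"):
    seen = defaultdict(list)
    for conf in sorted(glob.glob(pattern)):
        with open(conf, encoding="utf-8", errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) >= 2:
                    seen[fields[1]].append(f"{conf}:{lineno}")
    return {path: locs for path, locs in seen.items() if len(locs) > 1}

Run against /usr/lib/tmpfiles.d on this image, it would be expected to flag the same "/root" and "/var/log/journal" entries the log warns about.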
Jun 25 18:42:58.505534 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:42:58.512105 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 18:42:58.525554 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 18:42:58.540334 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:42:58.540616 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:42:58.548602 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:42:58.561599 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:42:58.577615 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:42:58.580701 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:42:58.580870 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:42:58.582778 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:42:58.583466 lvm[1608]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:42:58.583517 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:42:58.605264 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 18:42:58.610873 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:42:58.612395 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:42:58.618948 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:42:58.620321 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:42:58.630640 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:42:58.636929 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:42:58.637098 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:42:58.637229 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:42:58.641414 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 18:42:58.653454 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 18:42:58.657941 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:42:58.658113 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:42:58.680567 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:42:58.684670 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:42:58.685195 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jun 25 18:42:58.697765 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 18:42:58.707508 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:42:58.715082 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:42:58.719636 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:42:58.722825 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:42:58.723088 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 18:42:58.725966 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:42:58.747175 lvm[1640]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:42:58.747597 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 18:42:58.758495 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:42:58.759416 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:42:58.764285 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:42:58.765426 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:42:58.770098 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:42:58.770588 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:42:58.782272 systemd[1]: Finished ensure-sysext.service. Jun 25 18:42:58.794154 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:42:58.798319 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:42:58.798956 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:42:58.802817 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:58.804170 systemd-networkd[1509]: lo: Link UP Jun 25 18:42:58.804437 systemd-networkd[1509]: lo: Gained carrier Jun 25 18:42:58.806642 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 18:42:58.807388 systemd-networkd[1509]: Enumeration completed Jun 25 18:42:58.807810 systemd-networkd[1509]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:42:58.807886 systemd-networkd[1509]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:42:58.810681 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:42:58.819603 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 18:42:58.823269 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:42:58.832715 augenrules[1658]: No rules Jun 25 18:42:58.834013 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:42:58.847815 systemd-resolved[1611]: Positive Trust Anchors: Jun 25 18:42:58.847831 systemd-resolved[1611]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:42:58.847881 systemd-resolved[1611]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:42:58.866185 systemd-resolved[1611]: Using system hostname 'ci-4012.0.0-a-7f29c71dfa'. Jun 25 18:42:58.871373 kernel: mlx5_core a8c9:00:02.0 enP43209s1: Link up Jun 25 18:42:58.891522 kernel: hv_netvsc 000d3ad8-bf2f-000d-3ad8-bf2f000d3ad8 eth0: Data path switched to VF: enP43209s1 Jun 25 18:42:58.892299 systemd-networkd[1509]: enP43209s1: Link UP Jun 25 18:42:58.892471 systemd-networkd[1509]: eth0: Link UP Jun 25 18:42:58.892476 systemd-networkd[1509]: eth0: Gained carrier Jun 25 18:42:58.892500 systemd-networkd[1509]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:42:58.895309 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:42:58.896670 systemd-networkd[1509]: enP43209s1: Gained carrier Jun 25 18:42:58.898692 systemd[1]: Reached target network.target - Network. Jun 25 18:42:58.901161 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:42:58.930423 systemd-networkd[1509]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 25 18:42:58.969362 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 18:42:58.974924 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 18:43:00.215490 systemd-networkd[1509]: eth0: Gained IPv6LL Jun 25 18:43:00.218583 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 18:43:00.222149 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 18:43:00.279519 systemd-networkd[1509]: enP43209s1: Gained IPv6LL Jun 25 18:43:02.745585 ldconfig[1318]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 18:43:02.755074 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 18:43:02.766527 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 18:43:02.778792 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 18:43:02.781795 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:43:02.784479 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 18:43:02.787797 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 18:43:02.791047 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 18:43:02.793714 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
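Editor's note: a quick sanity check on the DHCPv4 lease systemd-networkd reports above (10.200.8.39/24 with gateway 10.200.8.1, handed out by 168.63.129.16), using only the standard library:

# Values copied from the "eth0: DHCPv4 address ..." line above.
import ipaddress

iface = ipaddress.ip_interface("10.200.8.39/24")
gateway = ipaddress.ip_address("10.200.8.1")

print(iface.network)              # 10.200.8.0/24
print(gateway in iface.network)   # True: the gateway sits inside the leased /24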
Jun 25 18:43:02.796925 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 18:43:02.802011 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 18:43:02.802064 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:43:02.804082 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:43:02.826816 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 18:43:02.832243 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 18:43:02.843271 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 18:43:02.846798 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 18:43:02.849737 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:43:02.852245 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:43:02.854552 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:43:02.854585 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:43:02.866681 systemd[1]: Starting chronyd.service - NTP client/server... Jun 25 18:43:02.872470 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 18:43:02.877659 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 18:43:02.886528 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 18:43:02.898483 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 18:43:02.907741 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 18:43:02.910494 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 18:43:02.913470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:02.920327 jq[1677]: false Jun 25 18:43:02.925285 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 18:43:02.935544 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 18:43:02.940718 (chronyd)[1671]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jun 25 18:43:02.951469 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 18:43:02.957549 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 18:43:02.965056 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 18:43:02.975388 chronyd[1691]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jun 25 18:43:02.980551 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 18:43:02.986272 chronyd[1691]: Timezone right/UTC failed leap second check, ignoring Jun 25 18:43:02.986444 chronyd[1691]: Loaded seccomp filter (level 2) Jun 25 18:43:02.987624 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 18:43:02.988211 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jun 25 18:43:02.992569 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 18:43:03.002491 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 18:43:03.009643 extend-filesystems[1678]: Found loop4 Jun 25 18:43:03.015415 extend-filesystems[1678]: Found loop5 Jun 25 18:43:03.015415 extend-filesystems[1678]: Found loop6 Jun 25 18:43:03.015415 extend-filesystems[1678]: Found loop7 Jun 25 18:43:03.015415 extend-filesystems[1678]: Found sda Jun 25 18:43:03.015415 extend-filesystems[1678]: Found sda1 Jun 25 18:43:03.015415 extend-filesystems[1678]: Found sda2 Jun 25 18:43:03.015415 extend-filesystems[1678]: Found sda3 Jun 25 18:43:03.015415 extend-filesystems[1678]: Found usr Jun 25 18:43:03.015415 extend-filesystems[1678]: Found sda4 Jun 25 18:43:03.015415 extend-filesystems[1678]: Found sda6 Jun 25 18:43:03.015415 extend-filesystems[1678]: Found sda7 Jun 25 18:43:03.015415 extend-filesystems[1678]: Found sda9 Jun 25 18:43:03.015415 extend-filesystems[1678]: Checking size of /dev/sda9 Jun 25 18:43:03.013097 systemd[1]: Started chronyd.service - NTP client/server. Jun 25 18:43:03.164075 extend-filesystems[1678]: Old size kept for /dev/sda9 Jun 25 18:43:03.164075 extend-filesystems[1678]: Found sr0 Jun 25 18:43:03.018965 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 18:43:03.179843 jq[1699]: true Jun 25 18:43:03.167135 dbus-daemon[1674]: [system] SELinux support is enabled Jun 25 18:43:03.020917 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 18:43:03.026708 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 18:43:03.026920 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 18:43:03.188000 tar[1707]: linux-amd64/helm Jun 25 18:43:03.042835 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 18:43:03.043430 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 18:43:03.119307 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 18:43:03.188706 jq[1708]: true Jun 25 18:43:03.119342 (ntainerd)[1720]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 25 18:43:03.130226 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 18:43:03.130461 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 18:43:03.167321 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 18:43:03.195521 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 18:43:03.195566 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 18:43:03.201265 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 18:43:03.201299 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 18:43:03.210925 systemd-logind[1695]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 18:43:03.214540 systemd-logind[1695]: New seat seat0. 
Jun 25 18:43:03.215273 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 18:43:03.219454 update_engine[1698]: I0625 18:43:03.218293 1698 main.cc:92] Flatcar Update Engine starting Jun 25 18:43:03.226741 systemd[1]: Started update-engine.service - Update Engine. Jun 25 18:43:03.230668 update_engine[1698]: I0625 18:43:03.229527 1698 update_check_scheduler.cc:74] Next update check in 10m1s Jun 25 18:43:03.239634 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 18:43:03.397388 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1754) Jun 25 18:43:03.429072 coreos-metadata[1673]: Jun 25 18:43:03.429 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 18:43:03.436372 coreos-metadata[1673]: Jun 25 18:43:03.433 INFO Fetch successful Jun 25 18:43:03.436372 coreos-metadata[1673]: Jun 25 18:43:03.433 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 25 18:43:03.439332 coreos-metadata[1673]: Jun 25 18:43:03.438 INFO Fetch successful Jun 25 18:43:03.439460 bash[1750]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:43:03.440785 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 18:43:03.442968 coreos-metadata[1673]: Jun 25 18:43:03.442 INFO Fetching http://168.63.129.16/machine/30d9eda6-57c1-4a81-8379-3dea9cbc5e8a/2a3bfdc0%2De805%2D4566%2Db58d%2Dd683c6125667.%5Fci%2D4012.0.0%2Da%2D7f29c71dfa?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 25 18:43:03.450957 coreos-metadata[1673]: Jun 25 18:43:03.450 INFO Fetch successful Jun 25 18:43:03.451400 coreos-metadata[1673]: Jun 25 18:43:03.451 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 25 18:43:03.453088 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 25 18:43:03.472444 coreos-metadata[1673]: Jun 25 18:43:03.472 INFO Fetch successful Jun 25 18:43:03.571646 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 18:43:03.575554 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 18:43:03.664143 sshd_keygen[1701]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 18:43:03.739713 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 18:43:03.749674 locksmithd[1734]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 18:43:03.753626 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 18:43:03.765376 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 25 18:43:03.797539 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 25 18:43:03.808747 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 18:43:03.809344 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 18:43:03.826643 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 18:43:03.862248 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 18:43:03.877596 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 18:43:03.891167 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 18:43:03.898989 systemd[1]: Reached target getty.target - Login Prompts. 
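Editor's note: one of the coreos-metadata fetches above goes to the Azure instance metadata service at 169.254.169.254. A hedged sketch of the same request: the URL is copied from the log line, and the "Metadata: true" header is the documented IMDS requirement; what the agent itself sends beyond that is not visible here.

# Fetch the VM size from the Azure IMDS endpoint logged above.
import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")

def fetch_vm_size(timeout=2.0):
    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    print(fetch_vm_size())   # e.g. "Standard_D4s_v3" when run inside an Azure VM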
Jun 25 18:43:04.149246 containerd[1720]: time="2024-06-25T18:43:04.149107300Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Jun 25 18:43:04.176484 tar[1707]: linux-amd64/LICENSE Jun 25 18:43:04.176484 tar[1707]: linux-amd64/README.md Jun 25 18:43:04.191694 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 18:43:04.203055 containerd[1720]: time="2024-06-25T18:43:04.203005700Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 18:43:04.203135 containerd[1720]: time="2024-06-25T18:43:04.203071500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:43:04.204753 containerd[1720]: time="2024-06-25T18:43:04.204696000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:43:04.204753 containerd[1720]: time="2024-06-25T18:43:04.204731900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:43:04.204984 containerd[1720]: time="2024-06-25T18:43:04.204952400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:43:04.204984 containerd[1720]: time="2024-06-25T18:43:04.204977500Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 18:43:04.205092 containerd[1720]: time="2024-06-25T18:43:04.205072300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 18:43:04.205162 containerd[1720]: time="2024-06-25T18:43:04.205137700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:43:04.205201 containerd[1720]: time="2024-06-25T18:43:04.205159200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 18:43:04.205922 containerd[1720]: time="2024-06-25T18:43:04.205262500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:43:04.205922 containerd[1720]: time="2024-06-25T18:43:04.205498400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 18:43:04.205922 containerd[1720]: time="2024-06-25T18:43:04.205523200Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 18:43:04.205922 containerd[1720]: time="2024-06-25T18:43:04.205537600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:43:04.205922 containerd[1720]: time="2024-06-25T18:43:04.205672100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:43:04.205922 containerd[1720]: time="2024-06-25T18:43:04.205691200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 18:43:04.205922 containerd[1720]: time="2024-06-25T18:43:04.205746600Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 18:43:04.205922 containerd[1720]: time="2024-06-25T18:43:04.205761400Z" level=info msg="metadata content store policy set" policy=shared Jun 25 18:43:04.234731 containerd[1720]: time="2024-06-25T18:43:04.234686500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 18:43:04.234853 containerd[1720]: time="2024-06-25T18:43:04.234762200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 18:43:04.234853 containerd[1720]: time="2024-06-25T18:43:04.234787600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 18:43:04.234853 containerd[1720]: time="2024-06-25T18:43:04.234837000Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 18:43:04.234968 containerd[1720]: time="2024-06-25T18:43:04.234857100Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 18:43:04.234968 containerd[1720]: time="2024-06-25T18:43:04.234871900Z" level=info msg="NRI interface is disabled by configuration." Jun 25 18:43:04.234968 containerd[1720]: time="2024-06-25T18:43:04.234933400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 18:43:04.235120 containerd[1720]: time="2024-06-25T18:43:04.235095100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 18:43:04.235193 containerd[1720]: time="2024-06-25T18:43:04.235121000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 18:43:04.235193 containerd[1720]: time="2024-06-25T18:43:04.235138600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 18:43:04.235193 containerd[1720]: time="2024-06-25T18:43:04.235159500Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 18:43:04.235812 containerd[1720]: time="2024-06-25T18:43:04.235332600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 18:43:04.235812 containerd[1720]: time="2024-06-25T18:43:04.235385100Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 18:43:04.235812 containerd[1720]: time="2024-06-25T18:43:04.235405200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 18:43:04.235812 containerd[1720]: time="2024-06-25T18:43:04.235425200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 18:43:04.235812 containerd[1720]: time="2024-06-25T18:43:04.235446500Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jun 25 18:43:04.235812 containerd[1720]: time="2024-06-25T18:43:04.235464900Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 18:43:04.235812 containerd[1720]: time="2024-06-25T18:43:04.235483200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 18:43:04.235812 containerd[1720]: time="2024-06-25T18:43:04.235499200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 18:43:04.235812 containerd[1720]: time="2024-06-25T18:43:04.235638300Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 18:43:04.236136 containerd[1720]: time="2024-06-25T18:43:04.235949900Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 18:43:04.236136 containerd[1720]: time="2024-06-25T18:43:04.235988400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 18:43:04.236136 containerd[1720]: time="2024-06-25T18:43:04.236008600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 18:43:04.236136 containerd[1720]: time="2024-06-25T18:43:04.236056800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 18:43:04.236136 containerd[1720]: time="2024-06-25T18:43:04.236123800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 18:43:04.236317 containerd[1720]: time="2024-06-25T18:43:04.236141700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 18:43:04.236317 containerd[1720]: time="2024-06-25T18:43:04.236161500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 18:43:04.236317 containerd[1720]: time="2024-06-25T18:43:04.236178400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 18:43:04.236317 containerd[1720]: time="2024-06-25T18:43:04.236196900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 18:43:04.236317 containerd[1720]: time="2024-06-25T18:43:04.236214900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 18:43:04.236317 containerd[1720]: time="2024-06-25T18:43:04.236231900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 18:43:04.236317 containerd[1720]: time="2024-06-25T18:43:04.236249000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 18:43:04.236317 containerd[1720]: time="2024-06-25T18:43:04.236269700Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 18:43:04.236607 containerd[1720]: time="2024-06-25T18:43:04.236443200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 18:43:04.236607 containerd[1720]: time="2024-06-25T18:43:04.236467300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jun 25 18:43:04.236607 containerd[1720]: time="2024-06-25T18:43:04.236488000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 18:43:04.236607 containerd[1720]: time="2024-06-25T18:43:04.236506800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 18:43:04.236607 containerd[1720]: time="2024-06-25T18:43:04.236524600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 18:43:04.236607 containerd[1720]: time="2024-06-25T18:43:04.236543700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 18:43:04.238004 containerd[1720]: time="2024-06-25T18:43:04.236562100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 18:43:04.239363 containerd[1720]: time="2024-06-25T18:43:04.236893000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 25 18:43:04.239433 containerd[1720]: time="2024-06-25T18:43:04.239248400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock 
RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 18:43:04.239433 containerd[1720]: time="2024-06-25T18:43:04.239378000Z" level=info msg="Connect containerd service" Jun 25 18:43:04.239662 containerd[1720]: time="2024-06-25T18:43:04.239429000Z" level=info msg="using legacy CRI server" Jun 25 18:43:04.239662 containerd[1720]: time="2024-06-25T18:43:04.239456000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 18:43:04.239662 containerd[1720]: time="2024-06-25T18:43:04.239635700Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 18:43:04.241532 containerd[1720]: time="2024-06-25T18:43:04.241117000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:43:04.241532 containerd[1720]: time="2024-06-25T18:43:04.241178300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 18:43:04.241532 containerd[1720]: time="2024-06-25T18:43:04.241205400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 25 18:43:04.241532 containerd[1720]: time="2024-06-25T18:43:04.241221200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 18:43:04.241532 containerd[1720]: time="2024-06-25T18:43:04.241238500Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 25 18:43:04.241794 containerd[1720]: time="2024-06-25T18:43:04.241616200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 18:43:04.241794 containerd[1720]: time="2024-06-25T18:43:04.241677000Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 18:43:04.241794 containerd[1720]: time="2024-06-25T18:43:04.241712900Z" level=info msg="Start subscribing containerd event" Jun 25 18:43:04.241794 containerd[1720]: time="2024-06-25T18:43:04.241755000Z" level=info msg="Start recovering state" Jun 25 18:43:04.244944 containerd[1720]: time="2024-06-25T18:43:04.241828100Z" level=info msg="Start event monitor" Jun 25 18:43:04.244944 containerd[1720]: time="2024-06-25T18:43:04.241842200Z" level=info msg="Start snapshots syncer" Jun 25 18:43:04.244944 containerd[1720]: time="2024-06-25T18:43:04.241853800Z" level=info msg="Start cni network conf syncer for default" Jun 25 18:43:04.244944 containerd[1720]: time="2024-06-25T18:43:04.241864300Z" level=info msg="Start streaming server" Jun 25 18:43:04.244944 containerd[1720]: time="2024-06-25T18:43:04.241944900Z" level=info msg="containerd successfully booted in 0.096770s" Jun 25 18:43:04.242045 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 18:43:04.507381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:43:04.512059 systemd[1]: Reached target multi-user.target - Multi-User System. 
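Editor's note: the containerd CRI plugin above logs "no network config found in /etc/cni/net.d", which is normal this early in boot, since no CNI configuration has been installed yet. A minimal probe of the same directory; the accepted file extensions are an assumption about CNI-style config loaders, not something this log states.

# Check the CNI config directory the CRI plugin complains about.
from pathlib import Path

def cni_configs(conf_dir="/etc/cni/net.d"):
    d = Path(conf_dir)
    if not d.is_dir():
        return []
    return sorted(p.name for p in d.iterdir()
                  if p.suffix in (".conf", ".conflist", ".json"))

print(cni_configs() or "no network config found")   # empty matches the containerd warning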
Jun 25 18:43:04.513299 (kubelet)[1835]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:43:04.516974 systemd[1]: Startup finished in 959ms (firmware) + 28.507s (loader) + 984ms (kernel) + 13.339s (initrd) + 12.685s (userspace) = 56.476s. Jun 25 18:43:04.779079 login[1817]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 18:43:04.781865 login[1818]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 18:43:04.793508 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 18:43:04.801609 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 18:43:04.806436 systemd-logind[1695]: New session 2 of user core. Jun 25 18:43:04.813311 systemd-logind[1695]: New session 1 of user core. Jun 25 18:43:04.821188 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 18:43:04.826677 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 18:43:04.838936 (systemd)[1846]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:04.994789 systemd[1846]: Queued start job for default target default.target. Jun 25 18:43:05.000688 systemd[1846]: Created slice app.slice - User Application Slice. Jun 25 18:43:05.000726 systemd[1846]: Reached target paths.target - Paths. Jun 25 18:43:05.000742 systemd[1846]: Reached target timers.target - Timers. Jun 25 18:43:05.002680 systemd[1846]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 25 18:43:05.016409 systemd[1846]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 25 18:43:05.016538 systemd[1846]: Reached target sockets.target - Sockets. Jun 25 18:43:05.016558 systemd[1846]: Reached target basic.target - Basic System. Jun 25 18:43:05.016600 systemd[1846]: Reached target default.target - Main User Target. Jun 25 18:43:05.016635 systemd[1846]: Startup finished in 170ms. Jun 25 18:43:05.016741 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 18:43:05.022549 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 18:43:05.023650 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 18:43:05.283176 kubelet[1835]: E0625 18:43:05.283045 1835 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:43:05.286737 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:43:05.286964 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
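Editor's note: the kubelet failure above is a missing file rather than a crash: /var/lib/kubelet/config.yaml does not exist yet, so the unit exits with status 1. Reading this as "the node has not been configured by kubeadm yet" is an interpretation; the log only shows the missing path. A trivial probe of the same precondition:

# Check the precondition the kubelet error above points at.
import os

CONFIG = "/var/lib/kubelet/config.yaml"   # path taken from the kubelet error message

if os.path.exists(CONFIG):
    print(f"{CONFIG} present, kubelet can load its configuration")
else:
    print(f"{CONFIG} missing - kubelet will keep failing until it is written")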
Jun 25 18:43:06.028342 waagent[1813]: 2024-06-25T18:43:06.028238Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jun 25 18:43:06.044084 waagent[1813]: 2024-06-25T18:43:06.029642Z INFO Daemon Daemon OS: flatcar 4012.0.0 Jun 25 18:43:06.044084 waagent[1813]: 2024-06-25T18:43:06.030132Z INFO Daemon Daemon Python: 3.11.9 Jun 25 18:43:06.044084 waagent[1813]: 2024-06-25T18:43:06.030925Z INFO Daemon Daemon Run daemon Jun 25 18:43:06.044084 waagent[1813]: 2024-06-25T18:43:06.031972Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4012.0.0' Jun 25 18:43:06.044084 waagent[1813]: 2024-06-25T18:43:06.032863Z INFO Daemon Daemon Using waagent for provisioning Jun 25 18:43:06.044084 waagent[1813]: 2024-06-25T18:43:06.033959Z INFO Daemon Daemon Activate resource disk Jun 25 18:43:06.044084 waagent[1813]: 2024-06-25T18:43:06.034272Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 25 18:43:06.044084 waagent[1813]: 2024-06-25T18:43:06.038638Z INFO Daemon Daemon Found device: None Jun 25 18:43:06.044084 waagent[1813]: 2024-06-25T18:43:06.038770Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 25 18:43:06.044084 waagent[1813]: 2024-06-25T18:43:06.039190Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 25 18:43:06.044084 waagent[1813]: 2024-06-25T18:43:06.041638Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 25 18:43:06.044084 waagent[1813]: 2024-06-25T18:43:06.042567Z INFO Daemon Daemon Running default provisioning handler Jun 25 18:43:06.061120 waagent[1813]: 2024-06-25T18:43:06.061038Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jun 25 18:43:06.062798 waagent[1813]: 2024-06-25T18:43:06.062742Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 25 18:43:06.062953 waagent[1813]: 2024-06-25T18:43:06.062913Z INFO Daemon Daemon cloud-init is enabled: False Jun 25 18:43:06.063058 waagent[1813]: 2024-06-25T18:43:06.063023Z INFO Daemon Daemon Copying ovf-env.xml Jun 25 18:43:06.165391 waagent[1813]: 2024-06-25T18:43:06.163591Z INFO Daemon Daemon Successfully mounted dvd Jun 25 18:43:06.177028 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 25 18:43:06.179864 waagent[1813]: 2024-06-25T18:43:06.179807Z INFO Daemon Daemon Detect protocol endpoint Jun 25 18:43:06.195286 waagent[1813]: 2024-06-25T18:43:06.180934Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 25 18:43:06.195286 waagent[1813]: 2024-06-25T18:43:06.183695Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jun 25 18:43:06.195286 waagent[1813]: 2024-06-25T18:43:06.184377Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 25 18:43:06.195286 waagent[1813]: 2024-06-25T18:43:06.185205Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 25 18:43:06.195286 waagent[1813]: 2024-06-25T18:43:06.185970Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 25 18:43:06.227271 waagent[1813]: 2024-06-25T18:43:06.227217Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 25 18:43:06.235270 waagent[1813]: 2024-06-25T18:43:06.228577Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 25 18:43:06.235270 waagent[1813]: 2024-06-25T18:43:06.228985Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 25 18:43:06.319971 waagent[1813]: 2024-06-25T18:43:06.319816Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 25 18:43:06.323251 waagent[1813]: 2024-06-25T18:43:06.323180Z INFO Daemon Daemon Forcing an update of the goal state. Jun 25 18:43:06.330162 waagent[1813]: 2024-06-25T18:43:06.330092Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 25 18:43:06.343189 waagent[1813]: 2024-06-25T18:43:06.343132Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.151 Jun 25 18:43:06.357132 waagent[1813]: 2024-06-25T18:43:06.344718Z INFO Daemon Jun 25 18:43:06.357132 waagent[1813]: 2024-06-25T18:43:06.346473Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 9cd951dc-d1a9-4261-8314-d50617e4ee85 eTag: 16100076567827295527 source: Fabric] Jun 25 18:43:06.357132 waagent[1813]: 2024-06-25T18:43:06.347974Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jun 25 18:43:06.357132 waagent[1813]: 2024-06-25T18:43:06.349032Z INFO Daemon Jun 25 18:43:06.357132 waagent[1813]: 2024-06-25T18:43:06.349834Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 25 18:43:06.357132 waagent[1813]: 2024-06-25T18:43:06.353878Z INFO Daemon Daemon Downloading artifacts profile blob Jun 25 18:43:06.431254 waagent[1813]: 2024-06-25T18:43:06.431165Z INFO Daemon Downloaded certificate {'thumbprint': 'C2031271468162C66E6A08570524E2BF7C2CB811', 'hasPrivateKey': False} Jun 25 18:43:06.435608 waagent[1813]: 2024-06-25T18:43:06.435542Z INFO Daemon Downloaded certificate {'thumbprint': '76303BA7F18694F11951F99892AE53A613A7AD4D', 'hasPrivateKey': True} Jun 25 18:43:06.436099 waagent[1813]: 2024-06-25T18:43:06.436047Z INFO Daemon Fetch goal state completed Jun 25 18:43:06.453728 waagent[1813]: 2024-06-25T18:43:06.444290Z INFO Daemon Daemon Starting provisioning Jun 25 18:43:06.453728 waagent[1813]: 2024-06-25T18:43:06.445095Z INFO Daemon Daemon Handle ovf-env.xml. Jun 25 18:43:06.453728 waagent[1813]: 2024-06-25T18:43:06.445933Z INFO Daemon Daemon Set hostname [ci-4012.0.0-a-7f29c71dfa] Jun 25 18:43:06.469490 waagent[1813]: 2024-06-25T18:43:06.469418Z INFO Daemon Daemon Publish hostname [ci-4012.0.0-a-7f29c71dfa] Jun 25 18:43:06.477164 waagent[1813]: 2024-06-25T18:43:06.470768Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 25 18:43:06.477164 waagent[1813]: 2024-06-25T18:43:06.471163Z INFO Daemon Daemon Primary interface is [eth0] Jun 25 18:43:06.495709 systemd-networkd[1509]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:43:06.495719 systemd-networkd[1509]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 25 18:43:06.495769 systemd-networkd[1509]: eth0: DHCP lease lost Jun 25 18:43:06.497140 waagent[1813]: 2024-06-25T18:43:06.497055Z INFO Daemon Daemon Create user account if not exists Jun 25 18:43:06.515550 waagent[1813]: 2024-06-25T18:43:06.498781Z INFO Daemon Daemon User core already exists, skip useradd Jun 25 18:43:06.515550 waagent[1813]: 2024-06-25T18:43:06.499592Z INFO Daemon Daemon Configure sudoer Jun 25 18:43:06.515550 waagent[1813]: 2024-06-25T18:43:06.500389Z INFO Daemon Daemon Configure sshd Jun 25 18:43:06.515550 waagent[1813]: 2024-06-25T18:43:06.501581Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 25 18:43:06.515550 waagent[1813]: 2024-06-25T18:43:06.502288Z INFO Daemon Daemon Deploy ssh public key. Jun 25 18:43:06.516474 systemd-networkd[1509]: eth0: DHCPv6 lease lost Jun 25 18:43:06.551468 systemd-networkd[1509]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 25 18:43:07.797379 waagent[1813]: 2024-06-25T18:43:07.797242Z INFO Daemon Daemon Provisioning complete Jun 25 18:43:07.811685 waagent[1813]: 2024-06-25T18:43:07.811621Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 25 18:43:07.818247 waagent[1813]: 2024-06-25T18:43:07.812698Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jun 25 18:43:07.818247 waagent[1813]: 2024-06-25T18:43:07.813437Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jun 25 18:43:07.938721 waagent[1897]: 2024-06-25T18:43:07.938622Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jun 25 18:43:07.939177 waagent[1897]: 2024-06-25T18:43:07.938791Z INFO ExtHandler ExtHandler OS: flatcar 4012.0.0 Jun 25 18:43:07.939177 waagent[1897]: 2024-06-25T18:43:07.938871Z INFO ExtHandler ExtHandler Python: 3.11.9 Jun 25 18:43:07.981973 waagent[1897]: 2024-06-25T18:43:07.981877Z INFO ExtHandler ExtHandler Distro: flatcar-4012.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jun 25 18:43:07.982205 waagent[1897]: 2024-06-25T18:43:07.982153Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 18:43:07.982301 waagent[1897]: 2024-06-25T18:43:07.982259Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 18:43:07.990158 waagent[1897]: 2024-06-25T18:43:07.990086Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 25 18:43:07.995579 waagent[1897]: 2024-06-25T18:43:07.995529Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Jun 25 18:43:07.996026 waagent[1897]: 2024-06-25T18:43:07.995971Z INFO ExtHandler Jun 25 18:43:07.996101 waagent[1897]: 2024-06-25T18:43:07.996059Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 5f80875b-f081-49be-a4c1-d8638f671e84 eTag: 16100076567827295527 source: Fabric] Jun 25 18:43:07.996438 waagent[1897]: 2024-06-25T18:43:07.996391Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jun 25 18:43:07.996995 waagent[1897]: 2024-06-25T18:43:07.996938Z INFO ExtHandler Jun 25 18:43:07.997059 waagent[1897]: 2024-06-25T18:43:07.997020Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 25 18:43:08.000663 waagent[1897]: 2024-06-25T18:43:08.000625Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 25 18:43:08.069601 waagent[1897]: 2024-06-25T18:43:08.069458Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C2031271468162C66E6A08570524E2BF7C2CB811', 'hasPrivateKey': False} Jun 25 18:43:08.069984 waagent[1897]: 2024-06-25T18:43:08.069931Z INFO ExtHandler Downloaded certificate {'thumbprint': '76303BA7F18694F11951F99892AE53A613A7AD4D', 'hasPrivateKey': True} Jun 25 18:43:08.070443 waagent[1897]: 2024-06-25T18:43:08.070394Z INFO ExtHandler Fetch goal state completed Jun 25 18:43:08.085735 waagent[1897]: 2024-06-25T18:43:08.085665Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1897 Jun 25 18:43:08.085895 waagent[1897]: 2024-06-25T18:43:08.085846Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 25 18:43:08.088616 waagent[1897]: 2024-06-25T18:43:08.087412Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4012.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jun 25 18:43:08.088616 waagent[1897]: 2024-06-25T18:43:08.087891Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 25 18:43:08.617371 waagent[1897]: 2024-06-25T18:43:08.617292Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 25 18:43:08.617640 waagent[1897]: 2024-06-25T18:43:08.617581Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 25 18:43:08.624771 waagent[1897]: 2024-06-25T18:43:08.624398Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 25 18:43:08.631559 systemd[1]: Reloading requested from client PID 1912 ('systemctl') (unit waagent.service)... Jun 25 18:43:08.631578 systemd[1]: Reloading... Jun 25 18:43:08.729454 zram_generator::config[1946]: No configuration found. Jun 25 18:43:09.548968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:43:09.627503 systemd[1]: Reloading finished in 995 ms. Jun 25 18:43:09.769931 waagent[1897]: 2024-06-25T18:43:09.654563Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jun 25 18:43:09.777620 systemd[1]: Reloading requested from client PID 2000 ('systemctl') (unit waagent.service)... Jun 25 18:43:09.777635 systemd[1]: Reloading... Jun 25 18:43:09.856430 zram_generator::config[2031]: No configuration found. Jun 25 18:43:09.969216 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:43:10.045086 systemd[1]: Reloading finished in 267 ms. 
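Editor's note: the waagent goal-state fetches above print certificate thumbprints as 40 hex characters, i.e. a 20-byte digest, which matches the conventional definition of a thumbprint as SHA-1 over the certificate's DER bytes. A hedged sketch of that derivation; the PEM handling is simplified and assumes one certificate per file, and the agent's on-disk certificate layout is not shown in this log.

# Derive an X.509 thumbprint the way the values above are conventionally computed.
import base64
import hashlib

def thumbprint_from_pem(pem_text: str) -> str:
    body = "".join(line for line in pem_text.splitlines() if "-----" not in line)
    der = base64.b64decode(body)
    return hashlib.sha1(der).hexdigest().upper()

# Usage (illustrative): feeding the matching certificate file should reproduce a
# 40-hex-character value such as 'C2031271468162C66E6A08570524E2BF7C2CB811'.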
Jun 25 18:43:10.070380 waagent[1897]: 2024-06-25T18:43:10.069630Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 25 18:43:10.071532 waagent[1897]: 2024-06-25T18:43:10.070655Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 25 18:43:10.997624 waagent[1897]: 2024-06-25T18:43:10.997528Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 25 18:43:10.998452 waagent[1897]: 2024-06-25T18:43:10.998331Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jun 25 18:43:10.999752 waagent[1897]: 2024-06-25T18:43:10.999662Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 25 18:43:10.999884 waagent[1897]: 2024-06-25T18:43:10.999826Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 18:43:11.000018 waagent[1897]: 2024-06-25T18:43:10.999966Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 18:43:11.000321 waagent[1897]: 2024-06-25T18:43:11.000264Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jun 25 18:43:11.001327 waagent[1897]: 2024-06-25T18:43:11.001280Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 25 18:43:11.001441 waagent[1897]: 2024-06-25T18:43:11.001385Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 25 18:43:11.001441 waagent[1897]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 25 18:43:11.001441 waagent[1897]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jun 25 18:43:11.001441 waagent[1897]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 25 18:43:11.001441 waagent[1897]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 25 18:43:11.001441 waagent[1897]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 25 18:43:11.001441 waagent[1897]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 25 18:43:11.001741 waagent[1897]: 2024-06-25T18:43:11.001539Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 18:43:11.001741 waagent[1897]: 2024-06-25T18:43:11.001635Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 18:43:11.001867 waagent[1897]: 2024-06-25T18:43:11.001805Z INFO EnvHandler ExtHandler Configure routes Jun 25 18:43:11.002302 waagent[1897]: 2024-06-25T18:43:11.002246Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 25 18:43:11.002480 waagent[1897]: 2024-06-25T18:43:11.002431Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 25 18:43:11.002714 waagent[1897]: 2024-06-25T18:43:11.002664Z INFO EnvHandler ExtHandler Gateway:None Jun 25 18:43:11.002811 waagent[1897]: 2024-06-25T18:43:11.002771Z INFO EnvHandler ExtHandler Routes:None Jun 25 18:43:11.003165 waagent[1897]: 2024-06-25T18:43:11.003108Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 25 18:43:11.003344 waagent[1897]: 2024-06-25T18:43:11.003302Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
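The routing table the MonitorHandler dumps above comes straight from /proc/net/route, which prints each IPv4 address as a little-endian hex word: 0108C80A is 10.200.8.1 (the default gateway acquired earlier via DHCP) and 10813FA8 is 168.63.129.16 (the Azure WireServer). A small Python sketch that decodes those fields:

    import socket
    import struct

    def decode_route_addr(hex_field: str) -> str:
        # /proc/net/route stores IPv4 addresses as little-endian hex words.
        return socket.inet_ntoa(struct.pack("<L", int(hex_field, 16)))

    for field in ("0108C80A", "0008C80A", "10813FA8", "FEA9FEA9"):
        print(field, "->", decode_route_addr(field))
    # 0108C80A -> 10.200.8.1 (default gateway), 0008C80A -> 10.200.8.0 (local subnet),
    # 10813FA8 -> 168.63.129.16 (WireServer), FEA9FEA9 -> 169.254.169.254 (instance metadata)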
Jun 25 18:43:11.003512 waagent[1897]: 2024-06-25T18:43:11.003427Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 25 18:43:11.013153 waagent[1897]: 2024-06-25T18:43:11.013106Z INFO ExtHandler ExtHandler Jun 25 18:43:11.013245 waagent[1897]: 2024-06-25T18:43:11.013203Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 600e83c9-95b5-45f5-bb69-16a9ecdf4cc3 correlation 6fec4f2a-da36-442f-a40c-f89c54300559 created: 2024-06-25T18:41:56.843124Z] Jun 25 18:43:11.013646 waagent[1897]: 2024-06-25T18:43:11.013598Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jun 25 18:43:11.014170 waagent[1897]: 2024-06-25T18:43:11.014126Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jun 25 18:43:11.046039 waagent[1897]: 2024-06-25T18:43:11.045977Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 942B47AB-5DF2-419A-A194-F45939A381C9;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jun 25 18:43:11.064056 waagent[1897]: 2024-06-25T18:43:11.063981Z INFO MonitorHandler ExtHandler Network interfaces: Jun 25 18:43:11.064056 waagent[1897]: Executing ['ip', '-a', '-o', 'link']: Jun 25 18:43:11.064056 waagent[1897]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 25 18:43:11.064056 waagent[1897]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d8:bf:2f brd ff:ff:ff:ff:ff:ff Jun 25 18:43:11.064056 waagent[1897]: 3: enP43209s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d8:bf:2f brd ff:ff:ff:ff:ff:ff\ altname enP43209p0s2 Jun 25 18:43:11.064056 waagent[1897]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 25 18:43:11.064056 waagent[1897]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 25 18:43:11.064056 waagent[1897]: 2: eth0 inet 10.200.8.39/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 25 18:43:11.064056 waagent[1897]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 25 18:43:11.064056 waagent[1897]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jun 25 18:43:11.064056 waagent[1897]: 2: eth0 inet6 fe80::20d:3aff:fed8:bf2f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 25 18:43:11.064056 waagent[1897]: 3: enP43209s1 inet6 fe80::20d:3aff:fed8:bf2f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 25 18:43:11.103478 waagent[1897]: 2024-06-25T18:43:11.103410Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jun 25 18:43:11.103478 waagent[1897]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:43:11.103478 waagent[1897]: pkts bytes target prot opt in out source destination Jun 25 18:43:11.103478 waagent[1897]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:43:11.103478 waagent[1897]: pkts bytes target prot opt in out source destination Jun 25 18:43:11.103478 waagent[1897]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:43:11.103478 waagent[1897]: pkts bytes target prot opt in out source destination Jun 25 18:43:11.103478 waagent[1897]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 25 18:43:11.103478 waagent[1897]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 25 18:43:11.103478 waagent[1897]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 25 18:43:11.106703 waagent[1897]: 2024-06-25T18:43:11.106643Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 25 18:43:11.106703 waagent[1897]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:43:11.106703 waagent[1897]: pkts bytes target prot opt in out source destination Jun 25 18:43:11.106703 waagent[1897]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:43:11.106703 waagent[1897]: pkts bytes target prot opt in out source destination Jun 25 18:43:11.106703 waagent[1897]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:43:11.106703 waagent[1897]: pkts bytes target prot opt in out source destination Jun 25 18:43:11.106703 waagent[1897]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 25 18:43:11.106703 waagent[1897]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 25 18:43:11.106703 waagent[1897]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 25 18:43:11.107086 waagent[1897]: 2024-06-25T18:43:11.106943Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jun 25 18:43:15.537850 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 18:43:15.543599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:15.637969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:43:15.642960 (kubelet)[2127]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:43:16.190454 kubelet[2127]: E0625 18:43:16.190323 2127 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:43:16.194610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:43:16.194830 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:43:26.255700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 18:43:26.262569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:26.392303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
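The three OUTPUT rules in the firewall dump above implement waagent's usual guard around the WireServer address 168.63.129.16: DNS traffic to port 53 is allowed, root-owned (UID 0) connections are allowed, and any other new or invalid connection is dropped. Roughly equivalent iptables invocations, shown only as a sketch of what the listed counters correspond to (the agent manages these rules itself):

    iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP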
Jun 25 18:43:26.396812 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:43:26.779274 chronyd[1691]: Selected source PHC0 Jun 25 18:43:26.949130 kubelet[2143]: E0625 18:43:26.949066 2143 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:43:26.951826 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:43:26.952025 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:43:33.500720 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 18:43:33.505646 systemd[1]: Started sshd@0-10.200.8.39:22-10.200.16.10:41018.service - OpenSSH per-connection server daemon (10.200.16.10:41018). Jun 25 18:43:34.213220 sshd[2152]: Accepted publickey for core from 10.200.16.10 port 41018 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:43:34.214952 sshd[2152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:34.219624 systemd-logind[1695]: New session 3 of user core. Jun 25 18:43:34.225516 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 18:43:34.790669 systemd[1]: Started sshd@1-10.200.8.39:22-10.200.16.10:41880.service - OpenSSH per-connection server daemon (10.200.16.10:41880). Jun 25 18:43:35.471578 sshd[2157]: Accepted publickey for core from 10.200.16.10 port 41880 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:43:35.473300 sshd[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:35.478918 systemd-logind[1695]: New session 4 of user core. Jun 25 18:43:35.484509 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 18:43:35.933388 sshd[2157]: pam_unix(sshd:session): session closed for user core Jun 25 18:43:35.937844 systemd[1]: sshd@1-10.200.8.39:22-10.200.16.10:41880.service: Deactivated successfully. Jun 25 18:43:35.939971 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 18:43:35.940726 systemd-logind[1695]: Session 4 logged out. Waiting for processes to exit. Jun 25 18:43:35.941646 systemd-logind[1695]: Removed session 4. Jun 25 18:43:36.046608 systemd[1]: Started sshd@2-10.200.8.39:22-10.200.16.10:41894.service - OpenSSH per-connection server daemon (10.200.16.10:41894). Jun 25 18:43:36.698496 sshd[2164]: Accepted publickey for core from 10.200.16.10 port 41894 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:43:36.700167 sshd[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:36.705110 systemd-logind[1695]: New session 5 of user core. Jun 25 18:43:36.711493 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 18:43:37.005668 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 18:43:37.011595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:37.108775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
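The kubelet restarts above, and the ones that follow, all fail for the same reason: /var/lib/kubelet/config.yaml does not exist yet. That file is normally generated when the node is bootstrapped into a cluster, for example by kubeadm init or kubeadm join, so a crash loop is expected on a machine that has not been joined. For orientation, the missing file is a KubeletConfiguration object; a minimal, assumed fragment of such a file:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # remaining fields are filled in during bootstrap and are not shown in this log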
Jun 25 18:43:37.113584 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:43:37.153082 sshd[2164]: pam_unix(sshd:session): session closed for user core Jun 25 18:43:37.155853 systemd[1]: sshd@2-10.200.8.39:22-10.200.16.10:41894.service: Deactivated successfully. Jun 25 18:43:37.157628 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 18:43:37.159132 systemd-logind[1695]: Session 5 logged out. Waiting for processes to exit. Jun 25 18:43:37.160049 systemd-logind[1695]: Removed session 5. Jun 25 18:43:37.273056 systemd[1]: Started sshd@3-10.200.8.39:22-10.200.16.10:41902.service - OpenSSH per-connection server daemon (10.200.16.10:41902). Jun 25 18:43:37.647105 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:43:37.647294 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:43:38.663435 sshd[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:38.663492 kubelet[2176]: E0625 18:43:37.644688 2176 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:43:38.664188 sshd[2186]: Accepted publickey for core from 10.200.16.10 port 41902 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:43:38.670091 systemd-logind[1695]: New session 6 of user core. Jun 25 18:43:38.679741 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 18:43:39.046223 sshd[2186]: pam_unix(sshd:session): session closed for user core Jun 25 18:43:39.049633 systemd[1]: sshd@3-10.200.8.39:22-10.200.16.10:41902.service: Deactivated successfully. Jun 25 18:43:39.051602 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 18:43:39.053066 systemd-logind[1695]: Session 6 logged out. Waiting for processes to exit. Jun 25 18:43:39.053922 systemd-logind[1695]: Removed session 6. Jun 25 18:43:39.164698 systemd[1]: Started sshd@4-10.200.8.39:22-10.200.16.10:41906.service - OpenSSH per-connection server daemon (10.200.16.10:41906). Jun 25 18:43:39.827740 sshd[2194]: Accepted publickey for core from 10.200.16.10 port 41906 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:43:39.829466 sshd[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:39.835018 systemd-logind[1695]: New session 7 of user core. Jun 25 18:43:39.840805 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 18:43:40.413884 sudo[2200]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 18:43:40.414313 sudo[2200]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:43:40.425601 sudo[2200]: pam_unix(sudo:session): session closed for user root Jun 25 18:43:40.529175 sshd[2194]: pam_unix(sshd:session): session closed for user core Jun 25 18:43:40.532911 systemd[1]: sshd@4-10.200.8.39:22-10.200.16.10:41906.service: Deactivated successfully. Jun 25 18:43:40.535149 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 18:43:40.537039 systemd-logind[1695]: Session 7 logged out. Waiting for processes to exit. Jun 25 18:43:40.538171 systemd-logind[1695]: Removed session 7. 
Jun 25 18:43:40.667704 systemd[1]: Started sshd@5-10.200.8.39:22-10.200.16.10:41910.service - OpenSSH per-connection server daemon (10.200.16.10:41910). Jun 25 18:43:41.326627 sshd[2205]: Accepted publickey for core from 10.200.16.10 port 41910 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:43:41.328426 sshd[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:41.333526 systemd-logind[1695]: New session 8 of user core. Jun 25 18:43:41.342500 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 18:43:41.682836 sudo[2209]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 18:43:41.683452 sudo[2209]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:43:41.686603 sudo[2209]: pam_unix(sudo:session): session closed for user root Jun 25 18:43:41.691326 sudo[2208]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 18:43:41.691654 sudo[2208]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:43:41.703675 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 18:43:41.706009 auditctl[2212]: No rules Jun 25 18:43:41.707175 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 18:43:41.707431 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 18:43:41.709404 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:43:41.735191 augenrules[2230]: No rules Jun 25 18:43:41.736622 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:43:41.737751 sudo[2208]: pam_unix(sudo:session): session closed for user root Jun 25 18:43:41.841847 sshd[2205]: pam_unix(sshd:session): session closed for user core Jun 25 18:43:41.846561 systemd[1]: sshd@5-10.200.8.39:22-10.200.16.10:41910.service: Deactivated successfully. Jun 25 18:43:41.848727 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 18:43:41.849599 systemd-logind[1695]: Session 8 logged out. Waiting for processes to exit. Jun 25 18:43:41.850628 systemd-logind[1695]: Removed session 8. Jun 25 18:43:41.966918 systemd[1]: Started sshd@6-10.200.8.39:22-10.200.16.10:41912.service - OpenSSH per-connection server daemon (10.200.16.10:41912). Jun 25 18:43:42.614230 sshd[2238]: Accepted publickey for core from 10.200.16.10 port 41912 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:43:42.615960 sshd[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:42.620517 systemd-logind[1695]: New session 9 of user core. Jun 25 18:43:42.629499 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 18:43:42.972872 sudo[2241]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 18:43:42.973259 sudo[2241]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:43:43.625683 systemd[1]: Starting docker.service - Docker Application Container Engine... 
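The audit-rules restart above runs auditctl and augenrules, and both report "No rules" because the preceding sudo call removed the 80-selinux.rules and 99-default.rules fragments from /etc/audit/rules.d/ and augenrules found nothing left to load. augenrules assembles /etc/audit/rules.d/*.rules into /etc/audit/audit.rules and loads the result, so an empty fragment directory yields an empty ruleset. A short usage sketch for checking the state by hand:

    auditctl -l         # list the currently loaded audit rules; prints "No rules" when empty
    augenrules --check  # report whether audit.rules is in sync with the rules.d fragments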
Jun 25 18:43:43.625766 (dockerd)[2251]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 25 18:43:44.658936 dockerd[2251]: time="2024-06-25T18:43:44.658874556Z" level=info msg="Starting up" Jun 25 18:43:44.755525 dockerd[2251]: time="2024-06-25T18:43:44.755472649Z" level=info msg="Loading containers: start." Jun 25 18:43:44.941552 kernel: Initializing XFRM netlink socket Jun 25 18:43:45.060909 systemd-networkd[1509]: docker0: Link UP Jun 25 18:43:45.094712 dockerd[2251]: time="2024-06-25T18:43:45.094675574Z" level=info msg="Loading containers: done." Jun 25 18:43:45.413284 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2849120736-merged.mount: Deactivated successfully. Jun 25 18:43:45.420446 dockerd[2251]: time="2024-06-25T18:43:45.420406087Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 18:43:45.420649 dockerd[2251]: time="2024-06-25T18:43:45.420619387Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 18:43:45.420756 dockerd[2251]: time="2024-06-25T18:43:45.420732787Z" level=info msg="Daemon has completed initialization" Jun 25 18:43:45.475153 dockerd[2251]: time="2024-06-25T18:43:45.474986639Z" level=info msg="API listen on /run/docker.sock" Jun 25 18:43:45.475560 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 18:43:45.702377 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jun 25 18:43:47.133919 containerd[1720]: time="2024-06-25T18:43:47.133876230Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 18:43:47.757577 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 25 18:43:47.762567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:47.795884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1356711831.mount: Deactivated successfully. Jun 25 18:43:47.898742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:43:47.903306 (kubelet)[2394]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:43:48.398573 kubelet[2394]: E0625 18:43:48.398513 2394 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:43:48.401099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:43:48.401323 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:43:48.510435 update_engine[1698]: I0625 18:43:48.510385 1698 update_attempter.cc:509] Updating boot flags... 
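The PullImage/ImageCreate sequence that starts here is containerd, acting as the CRI runtime, fetching the Kubernetes control-plane images; the per-image durations quoted in the messages ("in 3.166035437s" and so on) are the pull times for each reference. The same pulls can be reproduced by hand against containerd's k8s.io namespace, assuming the standard client tools and default socket locations; a usage sketch:

    ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.28.11
    crictl pull registry.k8s.io/kube-apiserver:v1.28.11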
Jun 25 18:43:48.574395 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2414) Jun 25 18:43:50.289271 containerd[1720]: time="2024-06-25T18:43:50.289210556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:50.291594 containerd[1720]: time="2024-06-25T18:43:50.291528258Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605186" Jun 25 18:43:50.294552 containerd[1720]: time="2024-06-25T18:43:50.294496461Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:50.298987 containerd[1720]: time="2024-06-25T18:43:50.298935866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:50.300811 containerd[1720]: time="2024-06-25T18:43:50.299955067Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 3.166035437s" Jun 25 18:43:50.300811 containerd[1720]: time="2024-06-25T18:43:50.299998167Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jun 25 18:43:50.320762 containerd[1720]: time="2024-06-25T18:43:50.320723486Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 18:43:52.087482 containerd[1720]: time="2024-06-25T18:43:52.087425246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:52.089506 containerd[1720]: time="2024-06-25T18:43:52.089451854Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719499" Jun 25 18:43:52.092792 containerd[1720]: time="2024-06-25T18:43:52.092738967Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:52.100635 containerd[1720]: time="2024-06-25T18:43:52.100584298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:52.101596 containerd[1720]: time="2024-06-25T18:43:52.101563502Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 1.780797415s" Jun 25 18:43:52.101877 containerd[1720]: time="2024-06-25T18:43:52.101699802Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jun 25 18:43:52.123241 containerd[1720]: time="2024-06-25T18:43:52.123193387Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 18:43:53.460620 containerd[1720]: time="2024-06-25T18:43:53.460570138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:53.462363 containerd[1720]: time="2024-06-25T18:43:53.462296445Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925513" Jun 25 18:43:53.467743 containerd[1720]: time="2024-06-25T18:43:53.467690266Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:53.472326 containerd[1720]: time="2024-06-25T18:43:53.472257484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:53.473421 containerd[1720]: time="2024-06-25T18:43:53.473264488Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 1.350023001s" Jun 25 18:43:53.473421 containerd[1720]: time="2024-06-25T18:43:53.473303288Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jun 25 18:43:53.494551 containerd[1720]: time="2024-06-25T18:43:53.494505071Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 18:43:54.680126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1607464296.mount: Deactivated successfully. 
Jun 25 18:43:55.150981 containerd[1720]: time="2024-06-25T18:43:55.150923475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:55.152734 containerd[1720]: time="2024-06-25T18:43:55.152672582Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118427" Jun 25 18:43:55.155475 containerd[1720]: time="2024-06-25T18:43:55.155423793Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:55.159447 containerd[1720]: time="2024-06-25T18:43:55.159411908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:55.160234 containerd[1720]: time="2024-06-25T18:43:55.160045011Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 1.66549514s" Jun 25 18:43:55.160234 containerd[1720]: time="2024-06-25T18:43:55.160083711Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jun 25 18:43:55.181242 containerd[1720]: time="2024-06-25T18:43:55.181206094Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 18:43:55.677005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount865944380.mount: Deactivated successfully. 
Jun 25 18:43:55.698078 containerd[1720]: time="2024-06-25T18:43:55.698025523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:55.700841 containerd[1720]: time="2024-06-25T18:43:55.700769434Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jun 25 18:43:55.704317 containerd[1720]: time="2024-06-25T18:43:55.704235348Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:55.709528 containerd[1720]: time="2024-06-25T18:43:55.709476768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:55.710343 containerd[1720]: time="2024-06-25T18:43:55.710203471Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 528.944577ms" Jun 25 18:43:55.710343 containerd[1720]: time="2024-06-25T18:43:55.710239571Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 18:43:55.731316 containerd[1720]: time="2024-06-25T18:43:55.731283954Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 18:43:56.325022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2818468146.mount: Deactivated successfully. Jun 25 18:43:58.503320 containerd[1720]: time="2024-06-25T18:43:58.503254738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:58.505462 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jun 25 18:43:58.511907 containerd[1720]: time="2024-06-25T18:43:58.511839272Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Jun 25 18:43:58.513228 containerd[1720]: time="2024-06-25T18:43:58.513181077Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:58.514704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jun 25 18:43:58.520544 containerd[1720]: time="2024-06-25T18:43:58.520512206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:58.522594 containerd[1720]: time="2024-06-25T18:43:58.522399213Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.791073159s" Jun 25 18:43:58.522594 containerd[1720]: time="2024-06-25T18:43:58.522441613Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 18:43:58.556647 containerd[1720]: time="2024-06-25T18:43:58.556599847Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 18:43:58.621741 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:43:58.626257 (kubelet)[2590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:43:58.670014 kubelet[2590]: E0625 18:43:58.669906 2590 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:43:58.672626 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:43:58.672850 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:43:59.630840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount150680761.mount: Deactivated successfully. 
Jun 25 18:44:00.192214 containerd[1720]: time="2024-06-25T18:44:00.192157825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:00.195006 containerd[1720]: time="2024-06-25T18:44:00.194943750Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191757" Jun 25 18:44:00.200467 containerd[1720]: time="2024-06-25T18:44:00.199825895Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:00.208502 containerd[1720]: time="2024-06-25T18:44:00.208464674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:00.209897 containerd[1720]: time="2024-06-25T18:44:00.209261882Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.652536734s" Jun 25 18:44:00.209897 containerd[1720]: time="2024-06-25T18:44:00.209299782Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jun 25 18:44:03.210455 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:03.216653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:03.243458 systemd[1]: Reloading requested from client PID 2666 ('systemctl') (unit session-9.scope)... Jun 25 18:44:03.243649 systemd[1]: Reloading... Jun 25 18:44:03.371383 zram_generator::config[2706]: No configuration found. Jun 25 18:44:03.484662 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:44:03.563745 systemd[1]: Reloading finished in 319 ms. Jun 25 18:44:03.611531 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 18:44:03.611636 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 18:44:03.611928 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:03.616683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:04.133624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:04.142944 (kubelet)[2774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:44:04.187375 kubelet[2774]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:44:04.187375 kubelet[2774]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jun 25 18:44:04.187375 kubelet[2774]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:44:04.187375 kubelet[2774]: I0625 18:44:04.186013 2774 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:44:04.527454 kubelet[2774]: I0625 18:44:04.527419 2774 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 18:44:04.527454 kubelet[2774]: I0625 18:44:04.527448 2774 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:44:04.527731 kubelet[2774]: I0625 18:44:04.527708 2774 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 18:44:04.738018 kubelet[2774]: I0625 18:44:04.737867 2774 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:44:04.738385 kubelet[2774]: E0625 18:44:04.738339 2774 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.39:6443: connect: connection refused Jun 25 18:44:04.748727 kubelet[2774]: I0625 18:44:04.748690 2774 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 18:44:04.750110 kubelet[2774]: I0625 18:44:04.750076 2774 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:44:04.750314 kubelet[2774]: I0625 18:44:04.750285 2774 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:44:04.750786 kubelet[2774]: I0625 18:44:04.750758 2774 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:44:04.750786 kubelet[2774]: I0625 18:44:04.750787 2774 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 
18:44:04.751646 kubelet[2774]: I0625 18:44:04.751623 2774 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:44:04.754025 kubelet[2774]: I0625 18:44:04.754003 2774 kubelet.go:393] "Attempting to sync node with API server" Jun 25 18:44:04.754025 kubelet[2774]: I0625 18:44:04.754029 2774 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:44:04.754292 kubelet[2774]: I0625 18:44:04.754063 2774 kubelet.go:309] "Adding apiserver pod source" Jun 25 18:44:04.754292 kubelet[2774]: I0625 18:44:04.754076 2774 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:44:04.759374 kubelet[2774]: W0625 18:44:04.758159 2774 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jun 25 18:44:04.759374 kubelet[2774]: E0625 18:44:04.758239 2774 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jun 25 18:44:04.759374 kubelet[2774]: W0625 18:44:04.758988 2774 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-7f29c71dfa&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jun 25 18:44:04.759374 kubelet[2774]: E0625 18:44:04.759036 2774 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-7f29c71dfa&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jun 25 18:44:04.760233 kubelet[2774]: I0625 18:44:04.760179 2774 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:44:04.772360 kubelet[2774]: W0625 18:44:04.772321 2774 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
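Two details of the kubelet start-up above translate naturally into configuration terms: the deprecated flags have config-file equivalents (the runtime endpoint, for instance, can be set in the KubeletConfiguration on v1.28), and the HardEvictionThresholds embedded in the node config are the kubelet defaults. Extending the KubeletConfiguration fragment sketched earlier, with the socket path assumed rather than read from this host:

    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"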
Jun 25 18:44:04.776448 kubelet[2774]: I0625 18:44:04.776425 2774 server.go:1232] "Started kubelet" Jun 25 18:44:04.776824 kubelet[2774]: I0625 18:44:04.776796 2774 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:44:04.778769 kubelet[2774]: I0625 18:44:04.777744 2774 server.go:462] "Adding debug handlers to kubelet server" Jun 25 18:44:04.780588 kubelet[2774]: I0625 18:44:04.779821 2774 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 18:44:04.780588 kubelet[2774]: I0625 18:44:04.780074 2774 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:44:04.780588 kubelet[2774]: E0625 18:44:04.780263 2774 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-4012.0.0-a-7f29c71dfa.17dc53921da56d0b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-4012.0.0-a-7f29c71dfa", UID:"ci-4012.0.0-a-7f29c71dfa", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-4012.0.0-a-7f29c71dfa"}, FirstTimestamp:time.Date(2024, time.June, 25, 18, 44, 4, 776398091, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 18, 44, 4, 776398091, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-4012.0.0-a-7f29c71dfa"}': 'Post "https://10.200.8.39:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.39:6443: connect: connection refused'(may retry after sleeping) Jun 25 18:44:04.780588 kubelet[2774]: I0625 18:44:04.780418 2774 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:44:04.781623 kubelet[2774]: E0625 18:44:04.781606 2774 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 18:44:04.781739 kubelet[2774]: E0625 18:44:04.781727 2774 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:44:04.783923 kubelet[2774]: E0625 18:44:04.783905 2774 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-7f29c71dfa\" not found" Jun 25 18:44:04.784380 kubelet[2774]: I0625 18:44:04.784031 2774 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:44:04.784380 kubelet[2774]: I0625 18:44:04.784117 2774 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:44:04.784380 kubelet[2774]: I0625 18:44:04.784176 2774 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:44:04.784741 kubelet[2774]: W0625 18:44:04.784696 2774 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jun 25 18:44:04.784850 kubelet[2774]: E0625 18:44:04.784839 2774 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jun 25 18:44:04.785615 kubelet[2774]: E0625 18:44:04.785598 2774 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-7f29c71dfa?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="200ms" Jun 25 18:44:04.847340 kubelet[2774]: I0625 18:44:04.847312 2774 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:44:04.847522 kubelet[2774]: I0625 18:44:04.847425 2774 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:44:04.847522 kubelet[2774]: I0625 18:44:04.847450 2774 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:44:04.856075 kubelet[2774]: I0625 18:44:04.856047 2774 policy_none.go:49] "None policy: Start" Jun 25 18:44:04.856667 kubelet[2774]: I0625 18:44:04.856649 2774 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 18:44:04.856809 kubelet[2774]: I0625 18:44:04.856715 2774 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:44:04.864113 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 18:44:04.874002 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 18:44:04.879083 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 25 18:44:04.882767 kubelet[2774]: I0625 18:44:04.882747 2774 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:44:04.885189 kubelet[2774]: I0625 18:44:04.885168 2774 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 18:44:04.885189 kubelet[2774]: I0625 18:44:04.885191 2774 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:44:04.885305 kubelet[2774]: I0625 18:44:04.885211 2774 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 18:44:04.885305 kubelet[2774]: E0625 18:44:04.885260 2774 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:44:04.887562 kubelet[2774]: I0625 18:44:04.887438 2774 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:44:04.887732 kubelet[2774]: I0625 18:44:04.887708 2774 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:44:04.890276 kubelet[2774]: E0625 18:44:04.890250 2774 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4012.0.0-a-7f29c71dfa\" not found" Jun 25 18:44:04.890635 kubelet[2774]: W0625 18:44:04.890566 2774 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jun 25 18:44:04.890635 kubelet[2774]: E0625 18:44:04.890628 2774 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jun 25 18:44:04.893468 kubelet[2774]: I0625 18:44:04.892996 2774 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:04.893468 kubelet[2774]: E0625 18:44:04.893316 2774 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:04.986318 kubelet[2774]: I0625 18:44:04.986275 2774 topology_manager.go:215] "Topology Admit Handler" podUID="eca0508d5e75459d0944c293356ddcb3" podNamespace="kube-system" podName="kube-apiserver-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:04.986638 kubelet[2774]: E0625 18:44:04.986608 2774 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-7f29c71dfa?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="400ms" Jun 25 18:44:04.988702 kubelet[2774]: I0625 18:44:04.988556 2774 topology_manager.go:215] "Topology Admit Handler" podUID="cca5e0e776d363741373c442e3d8197e" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:04.990487 kubelet[2774]: I0625 18:44:04.990133 2774 topology_manager.go:215] "Topology Admit Handler" podUID="906848452a51444a2fdf03daf8691346" podNamespace="kube-system" podName="kube-scheduler-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:04.997580 systemd[1]: Created slice kubepods-burstable-podeca0508d5e75459d0944c293356ddcb3.slice - libcontainer container kubepods-burstable-podeca0508d5e75459d0944c293356ddcb3.slice. 
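The registration failure and reflector errors above, and the ones that follow, all reduce to one condition: nothing is listening on 10.200.8.39:6443 yet, because the kube-apiserver static pod that the Topology Admit Handler has just picked up from /etc/kubernetes/manifests is still being created. Until it serves, every client call ends in "connection refused". A quick probe of the port, sketched in Python with the address taken from the log:

    import socket

    API_ENDPOINT = ("10.200.8.39", 6443)  # node address and apiserver port from the journal above

    try:
        socket.create_connection(API_ENDPOINT, timeout=2).close()
        print("kube-apiserver is accepting connections")
    except OSError as exc:
        # While the static pod is still coming up this raises ConnectionRefusedError,
        # matching the "connect: connection refused" entries in the log.
        print(f"not reachable yet: {exc}")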
Jun 25 18:44:05.009479 systemd[1]: Created slice kubepods-burstable-podcca5e0e776d363741373c442e3d8197e.slice - libcontainer container kubepods-burstable-podcca5e0e776d363741373c442e3d8197e.slice. Jun 25 18:44:05.018077 systemd[1]: Created slice kubepods-burstable-pod906848452a51444a2fdf03daf8691346.slice - libcontainer container kubepods-burstable-pod906848452a51444a2fdf03daf8691346.slice. Jun 25 18:44:05.085710 kubelet[2774]: I0625 18:44:05.085575 2774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eca0508d5e75459d0944c293356ddcb3-k8s-certs\") pod \"kube-apiserver-ci-4012.0.0-a-7f29c71dfa\" (UID: \"eca0508d5e75459d0944c293356ddcb3\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:05.085710 kubelet[2774]: I0625 18:44:05.085693 2774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cca5e0e776d363741373c442e3d8197e-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.0.0-a-7f29c71dfa\" (UID: \"cca5e0e776d363741373c442e3d8197e\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:05.085922 kubelet[2774]: I0625 18:44:05.085736 2774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cca5e0e776d363741373c442e3d8197e-kubeconfig\") pod \"kube-controller-manager-ci-4012.0.0-a-7f29c71dfa\" (UID: \"cca5e0e776d363741373c442e3d8197e\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:05.085922 kubelet[2774]: I0625 18:44:05.085773 2774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cca5e0e776d363741373c442e3d8197e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.0.0-a-7f29c71dfa\" (UID: \"cca5e0e776d363741373c442e3d8197e\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:05.085922 kubelet[2774]: I0625 18:44:05.085804 2774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cca5e0e776d363741373c442e3d8197e-k8s-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-7f29c71dfa\" (UID: \"cca5e0e776d363741373c442e3d8197e\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:05.085922 kubelet[2774]: I0625 18:44:05.085836 2774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/906848452a51444a2fdf03daf8691346-kubeconfig\") pod \"kube-scheduler-ci-4012.0.0-a-7f29c71dfa\" (UID: \"906848452a51444a2fdf03daf8691346\") " pod="kube-system/kube-scheduler-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:05.085922 kubelet[2774]: I0625 18:44:05.085866 2774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eca0508d5e75459d0944c293356ddcb3-ca-certs\") pod \"kube-apiserver-ci-4012.0.0-a-7f29c71dfa\" (UID: \"eca0508d5e75459d0944c293356ddcb3\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:05.086172 kubelet[2774]: I0625 18:44:05.085904 2774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eca0508d5e75459d0944c293356ddcb3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.0.0-a-7f29c71dfa\" (UID: \"eca0508d5e75459d0944c293356ddcb3\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:05.086172 kubelet[2774]: I0625 18:44:05.085938 2774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cca5e0e776d363741373c442e3d8197e-ca-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-7f29c71dfa\" (UID: \"cca5e0e776d363741373c442e3d8197e\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:05.096389 kubelet[2774]: I0625 18:44:05.096342 2774 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:05.096737 kubelet[2774]: E0625 18:44:05.096716 2774 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:05.309202 containerd[1720]: time="2024-06-25T18:44:05.309142341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.0.0-a-7f29c71dfa,Uid:eca0508d5e75459d0944c293356ddcb3,Namespace:kube-system,Attempt:0,}" Jun 25 18:44:05.313677 containerd[1720]: time="2024-06-25T18:44:05.313642951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.0.0-a-7f29c71dfa,Uid:cca5e0e776d363741373c442e3d8197e,Namespace:kube-system,Attempt:0,}" Jun 25 18:44:05.323985 containerd[1720]: time="2024-06-25T18:44:05.323822073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.0.0-a-7f29c71dfa,Uid:906848452a51444a2fdf03daf8691346,Namespace:kube-system,Attempt:0,}" Jun 25 18:44:05.388036 kubelet[2774]: E0625 18:44:05.387935 2774 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-7f29c71dfa?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="800ms" Jun 25 18:44:05.499340 kubelet[2774]: I0625 18:44:05.499285 2774 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:05.499691 kubelet[2774]: E0625 18:44:05.499669 2774 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:05.706485 kubelet[2774]: W0625 18:44:05.706309 2774 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-7f29c71dfa&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jun 25 18:44:05.706485 kubelet[2774]: E0625 18:44:05.706422 2774 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-7f29c71dfa&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jun 25 18:44:05.750923 kubelet[2774]: W0625 18:44:05.750860 2774 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get 
"https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jun 25 18:44:05.751054 kubelet[2774]: E0625 18:44:05.750941 2774 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jun 25 18:44:05.761590 kubelet[2774]: W0625 18:44:05.761530 2774 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jun 25 18:44:05.761690 kubelet[2774]: E0625 18:44:05.761599 2774 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jun 25 18:44:05.904136 kubelet[2774]: W0625 18:44:05.904076 2774 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jun 25 18:44:05.904346 kubelet[2774]: E0625 18:44:05.904327 2774 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jun 25 18:44:05.910495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount427128226.mount: Deactivated successfully. 
Jun 25 18:44:05.951400 containerd[1720]: time="2024-06-25T18:44:05.951331128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:05.954224 containerd[1720]: time="2024-06-25T18:44:05.954167434Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jun 25 18:44:05.959623 containerd[1720]: time="2024-06-25T18:44:05.959523745Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:05.964301 containerd[1720]: time="2024-06-25T18:44:05.964264656Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:05.967578 containerd[1720]: time="2024-06-25T18:44:05.967498163Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:44:05.971228 containerd[1720]: time="2024-06-25T18:44:05.971191571Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:05.974264 containerd[1720]: time="2024-06-25T18:44:05.974227077Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:44:05.983015 containerd[1720]: time="2024-06-25T18:44:05.982875596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:05.984640 containerd[1720]: time="2024-06-25T18:44:05.984053498Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 670.324347ms" Jun 25 18:44:05.985687 containerd[1720]: time="2024-06-25T18:44:05.985651802Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 676.385661ms" Jun 25 18:44:05.988775 containerd[1720]: time="2024-06-25T18:44:05.988741409Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 664.835736ms" Jun 25 18:44:06.189370 kubelet[2774]: E0625 18:44:06.189323 2774 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-7f29c71dfa?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="1.6s" 
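The "Failed to ensure lease exists, will retry" errors are spaced with a doubling interval: 400ms on the first attempt, 800ms on the next, and 1.6s here. A tiny illustrative sketch of that doubling backoff (the factor is visible in the log; the cap and attempt count below are assumptions, not values taken from the kubelet):

    def lease_retry_intervals(initial_ms: float = 400.0, factor: float = 2.0,
                              cap_ms: float = 7000.0, attempts: int = 5):
        """Yield retry intervals that double each attempt, up to an assumed cap."""
        interval = initial_ms
        for _ in range(attempts):
            yield interval
            interval = min(interval * factor, cap_ms)

    print([f"{ms / 1000:g}s" for ms in lease_retry_intervals()])
    # ['0.4s', '0.8s', '1.6s', '3.2s', '6.4s'] -- the first three match the log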
Jun 25 18:44:06.303112 kubelet[2774]: I0625 18:44:06.302669 2774 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:06.303112 kubelet[2774]: E0625 18:44:06.303018 2774 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:06.680579 containerd[1720]: time="2024-06-25T18:44:06.680364802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:06.681697 containerd[1720]: time="2024-06-25T18:44:06.681539304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:06.681907 containerd[1720]: time="2024-06-25T18:44:06.681832805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:06.682044 containerd[1720]: time="2024-06-25T18:44:06.681851305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:06.682044 containerd[1720]: time="2024-06-25T18:44:06.681953905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:06.682044 containerd[1720]: time="2024-06-25T18:44:06.681983705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:06.682044 containerd[1720]: time="2024-06-25T18:44:06.682021105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:06.682440 containerd[1720]: time="2024-06-25T18:44:06.682282406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:06.683667 containerd[1720]: time="2024-06-25T18:44:06.683484009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:06.683667 containerd[1720]: time="2024-06-25T18:44:06.683545009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:06.683667 containerd[1720]: time="2024-06-25T18:44:06.683572109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:06.683667 containerd[1720]: time="2024-06-25T18:44:06.683591409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:06.717635 systemd[1]: Started cri-containerd-4f813945b56fe06cf3a002c1a26967eb2179604e9787c227a747472a489850eb.scope - libcontainer container 4f813945b56fe06cf3a002c1a26967eb2179604e9787c227a747472a489850eb. Jun 25 18:44:06.732552 systemd[1]: Started cri-containerd-4045b17d20cda6f012c9a36c22daf3dd1b128e7d7405acc5e54a14c66dfb3c7c.scope - libcontainer container 4045b17d20cda6f012c9a36c22daf3dd1b128e7d7405acc5e54a14c66dfb3c7c. 
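For each sandbox, containerd loads its runc shim plugins and systemd starts a matching transient scope named cri-containerd-<sandbox-id>.scope, so the sandbox ID that later appears in the "returns sandbox id" lines can be read straight out of the unit name. A small illustrative helper:

    def sandbox_id_from_unit(unit: str) -> str:
        """Extract the containerd sandbox ID embedded in a cri-containerd scope unit name."""
        prefix, suffix = "cri-containerd-", ".scope"
        assert unit.startswith(prefix) and unit.endswith(suffix), unit
        return unit[len(prefix):-len(suffix)]

    unit = ("cri-containerd-4045b17d20cda6f012c9a36c22daf3dd1b128e7d"
            "7405acc5e54a14c66dfb3c7c.scope")
    print(sandbox_id_from_unit(unit))
    # 4045b17d20cda6f012c9a36c22daf3dd1b128e7d7405acc5e54a14c66dfb3c7c
    # (the sandbox the next lines identify as kube-apiserver's)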
Jun 25 18:44:06.734043 systemd[1]: Started cri-containerd-bfa62a732a3938a632b9e074ca80f39342ad09290950f5092dc652ff17568f9f.scope - libcontainer container bfa62a732a3938a632b9e074ca80f39342ad09290950f5092dc652ff17568f9f. Jun 25 18:44:06.811948 containerd[1720]: time="2024-06-25T18:44:06.810577783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.0.0-a-7f29c71dfa,Uid:eca0508d5e75459d0944c293356ddcb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"4045b17d20cda6f012c9a36c22daf3dd1b128e7d7405acc5e54a14c66dfb3c7c\"" Jun 25 18:44:06.812803 containerd[1720]: time="2024-06-25T18:44:06.812511087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.0.0-a-7f29c71dfa,Uid:cca5e0e776d363741373c442e3d8197e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f813945b56fe06cf3a002c1a26967eb2179604e9787c227a747472a489850eb\"" Jun 25 18:44:06.820456 containerd[1720]: time="2024-06-25T18:44:06.819301502Z" level=info msg="CreateContainer within sandbox \"4f813945b56fe06cf3a002c1a26967eb2179604e9787c227a747472a489850eb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 18:44:06.820456 containerd[1720]: time="2024-06-25T18:44:06.819669803Z" level=info msg="CreateContainer within sandbox \"4045b17d20cda6f012c9a36c22daf3dd1b128e7d7405acc5e54a14c66dfb3c7c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 18:44:06.821296 containerd[1720]: time="2024-06-25T18:44:06.821263606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.0.0-a-7f29c71dfa,Uid:906848452a51444a2fdf03daf8691346,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfa62a732a3938a632b9e074ca80f39342ad09290950f5092dc652ff17568f9f\"" Jun 25 18:44:06.824225 containerd[1720]: time="2024-06-25T18:44:06.824195312Z" level=info msg="CreateContainer within sandbox \"bfa62a732a3938a632b9e074ca80f39342ad09290950f5092dc652ff17568f9f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 18:44:06.856541 kubelet[2774]: E0625 18:44:06.856507 2774 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.39:6443: connect: connection refused Jun 25 18:44:06.892168 containerd[1720]: time="2024-06-25T18:44:06.892121778Z" level=info msg="CreateContainer within sandbox \"4045b17d20cda6f012c9a36c22daf3dd1b128e7d7405acc5e54a14c66dfb3c7c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1fe9fb7dc23d616333e8fb5c9f13c5ad2239794258cab3e28d9a27409e84be7e\"" Jun 25 18:44:06.903417 containerd[1720]: time="2024-06-25T18:44:06.899520497Z" level=info msg="CreateContainer within sandbox \"4f813945b56fe06cf3a002c1a26967eb2179604e9787c227a747472a489850eb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"91d94e0f1655c56943a6706b7b7093917d6f56a0602b6882eb68ef24b77fe83e\"" Jun 25 18:44:06.903417 containerd[1720]: time="2024-06-25T18:44:06.899896398Z" level=info msg="StartContainer for \"1fe9fb7dc23d616333e8fb5c9f13c5ad2239794258cab3e28d9a27409e84be7e\"" Jun 25 18:44:06.908850 containerd[1720]: time="2024-06-25T18:44:06.908757220Z" level=info msg="StartContainer for \"91d94e0f1655c56943a6706b7b7093917d6f56a0602b6882eb68ef24b77fe83e\"" Jun 25 18:44:06.911249 containerd[1720]: 
time="2024-06-25T18:44:06.911214126Z" level=info msg="CreateContainer within sandbox \"bfa62a732a3938a632b9e074ca80f39342ad09290950f5092dc652ff17568f9f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ab39bc059e8b1f2004a4fb0100f9eb8de7f9b525984e8b574ba3671e3a732013\"" Jun 25 18:44:06.911674 containerd[1720]: time="2024-06-25T18:44:06.911647727Z" level=info msg="StartContainer for \"ab39bc059e8b1f2004a4fb0100f9eb8de7f9b525984e8b574ba3671e3a732013\"" Jun 25 18:44:06.965523 systemd[1]: Started cri-containerd-1fe9fb7dc23d616333e8fb5c9f13c5ad2239794258cab3e28d9a27409e84be7e.scope - libcontainer container 1fe9fb7dc23d616333e8fb5c9f13c5ad2239794258cab3e28d9a27409e84be7e. Jun 25 18:44:06.980878 systemd[1]: Started cri-containerd-91d94e0f1655c56943a6706b7b7093917d6f56a0602b6882eb68ef24b77fe83e.scope - libcontainer container 91d94e0f1655c56943a6706b7b7093917d6f56a0602b6882eb68ef24b77fe83e. Jun 25 18:44:06.991509 systemd[1]: Started cri-containerd-ab39bc059e8b1f2004a4fb0100f9eb8de7f9b525984e8b574ba3671e3a732013.scope - libcontainer container ab39bc059e8b1f2004a4fb0100f9eb8de7f9b525984e8b574ba3671e3a732013. Jun 25 18:44:07.050486 containerd[1720]: time="2024-06-25T18:44:07.050441277Z" level=info msg="StartContainer for \"1fe9fb7dc23d616333e8fb5c9f13c5ad2239794258cab3e28d9a27409e84be7e\" returns successfully" Jun 25 18:44:07.073387 containerd[1720]: time="2024-06-25T18:44:07.073251334Z" level=info msg="StartContainer for \"91d94e0f1655c56943a6706b7b7093917d6f56a0602b6882eb68ef24b77fe83e\" returns successfully" Jun 25 18:44:07.092616 containerd[1720]: time="2024-06-25T18:44:07.092567983Z" level=info msg="StartContainer for \"ab39bc059e8b1f2004a4fb0100f9eb8de7f9b525984e8b574ba3671e3a732013\" returns successfully" Jun 25 18:44:07.905131 kubelet[2774]: I0625 18:44:07.905096 2774 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:10.016396 kubelet[2774]: E0625 18:44:10.016334 2774 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4012.0.0-a-7f29c71dfa\" not found" node="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:10.920896 kubelet[2774]: I0625 18:44:10.920854 2774 kubelet_node_status.go:73] "Successfully registered node" node="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:11.916894 kubelet[2774]: I0625 18:44:11.916835 2774 apiserver.go:52] "Watching apiserver" Jun 25 18:44:11.985057 kubelet[2774]: I0625 18:44:11.985030 2774 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:44:12.728852 kubelet[2774]: W0625 18:44:12.728739 2774 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:44:12.814247 systemd[1]: Reloading requested from client PID 3047 ('systemctl') (unit session-9.scope)... Jun 25 18:44:12.814264 systemd[1]: Reloading... Jun 25 18:44:12.909385 zram_generator::config[3084]: No configuration found. Jun 25 18:44:13.037656 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:44:13.132606 systemd[1]: Reloading finished in 317 ms. 
Jun 25 18:44:13.173754 kubelet[2774]: I0625 18:44:13.173686 2774 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:44:13.173802 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:13.182201 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:44:13.182466 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:13.188004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:13.291308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:13.301757 (kubelet)[3151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:44:13.798378 kubelet[3151]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:44:13.798378 kubelet[3151]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:44:13.798378 kubelet[3151]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:44:13.798378 kubelet[3151]: I0625 18:44:13.798107 3151 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:44:13.805947 kubelet[3151]: I0625 18:44:13.805779 3151 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 18:44:13.805947 kubelet[3151]: I0625 18:44:13.805809 3151 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:44:13.806509 kubelet[3151]: I0625 18:44:13.806482 3151 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 18:44:13.808563 kubelet[3151]: I0625 18:44:13.808530 3151 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 18:44:13.810333 kubelet[3151]: I0625 18:44:13.810123 3151 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:44:13.826308 kubelet[3151]: I0625 18:44:13.826276 3151 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:44:13.826678 kubelet[3151]: I0625 18:44:13.826660 3151 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:44:13.827490 kubelet[3151]: I0625 18:44:13.827126 3151 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:44:13.827490 kubelet[3151]: I0625 18:44:13.827171 3151 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:44:13.827490 kubelet[3151]: I0625 18:44:13.827186 3151 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:44:13.827490 kubelet[3151]: I0625 18:44:13.827242 3151 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:44:13.827490 kubelet[3151]: I0625 18:44:13.827406 3151 kubelet.go:393] "Attempting to sync node with API server" Jun 25 18:44:13.827490 kubelet[3151]: I0625 18:44:13.827437 3151 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:44:13.828426 kubelet[3151]: I0625 18:44:13.828312 3151 kubelet.go:309] "Adding apiserver pod source" Jun 25 18:44:13.828539 kubelet[3151]: I0625 18:44:13.828495 3151 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:44:13.830904 kubelet[3151]: I0625 18:44:13.830767 3151 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:44:13.832555 kubelet[3151]: I0625 18:44:13.832466 3151 server.go:1232] "Started kubelet" Jun 25 18:44:13.837314 kubelet[3151]: I0625 18:44:13.837298 3151 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:44:13.841144 kubelet[3151]: E0625 18:44:13.841124 3151 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 18:44:13.841302 kubelet[3151]: E0625 18:44:13.841288 3151 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:44:13.847320 kubelet[3151]: I0625 18:44:13.847299 3151 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:44:13.848264 kubelet[3151]: I0625 18:44:13.848246 3151 server.go:462] "Adding debug handlers to kubelet server" Jun 25 18:44:13.856553 kubelet[3151]: I0625 18:44:13.856533 3151 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 18:44:13.856752 kubelet[3151]: I0625 18:44:13.856732 3151 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:44:13.863104 kubelet[3151]: I0625 18:44:13.861711 3151 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:44:13.863316 kubelet[3151]: I0625 18:44:13.863300 3151 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:44:13.864693 kubelet[3151]: I0625 18:44:13.864676 3151 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 18:44:13.864910 kubelet[3151]: I0625 18:44:13.864897 3151 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:44:13.865032 kubelet[3151]: I0625 18:44:13.865008 3151 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 18:44:13.865163 kubelet[3151]: E0625 18:44:13.865151 3151 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:44:13.867407 kubelet[3151]: I0625 18:44:13.866442 3151 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:44:13.867407 kubelet[3151]: I0625 18:44:13.866599 3151 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:44:13.952078 kubelet[3151]: I0625 18:44:13.952047 3151 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:44:13.952078 kubelet[3151]: I0625 18:44:13.952076 3151 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:44:13.952278 kubelet[3151]: I0625 18:44:13.952095 3151 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:44:13.952278 kubelet[3151]: I0625 18:44:13.952266 3151 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 18:44:13.952383 kubelet[3151]: I0625 18:44:13.952291 3151 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 18:44:13.952383 kubelet[3151]: I0625 18:44:13.952300 3151 policy_none.go:49] "None policy: Start" Jun 25 18:44:13.953273 kubelet[3151]: I0625 18:44:13.953249 3151 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 18:44:13.953423 kubelet[3151]: I0625 18:44:13.953288 3151 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:44:13.953517 kubelet[3151]: I0625 18:44:13.953483 3151 state_mem.go:75] "Updated machine memory state" Jun 25 18:44:13.959283 kubelet[3151]: I0625 18:44:13.958896 3151 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:44:13.959283 kubelet[3151]: I0625 18:44:13.959120 3151 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:44:13.965948 kubelet[3151]: I0625 18:44:13.965589 3151 topology_manager.go:215] "Topology Admit Handler" podUID="cca5e0e776d363741373c442e3d8197e" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:13.965948 kubelet[3151]: I0625 18:44:13.965822 3151 topology_manager.go:215] 
"Topology Admit Handler" podUID="906848452a51444a2fdf03daf8691346" podNamespace="kube-system" podName="kube-scheduler-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:13.965948 kubelet[3151]: I0625 18:44:13.965907 3151 topology_manager.go:215] "Topology Admit Handler" podUID="eca0508d5e75459d0944c293356ddcb3" podNamespace="kube-system" podName="kube-apiserver-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:13.966886 kubelet[3151]: I0625 18:44:13.966859 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cca5e0e776d363741373c442e3d8197e-k8s-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-7f29c71dfa\" (UID: \"cca5e0e776d363741373c442e3d8197e\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:13.967955 kubelet[3151]: I0625 18:44:13.967480 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cca5e0e776d363741373c442e3d8197e-kubeconfig\") pod \"kube-controller-manager-ci-4012.0.0-a-7f29c71dfa\" (UID: \"cca5e0e776d363741373c442e3d8197e\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:13.967955 kubelet[3151]: I0625 18:44:13.967537 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cca5e0e776d363741373c442e3d8197e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.0.0-a-7f29c71dfa\" (UID: \"cca5e0e776d363741373c442e3d8197e\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:13.967955 kubelet[3151]: I0625 18:44:13.967569 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/906848452a51444a2fdf03daf8691346-kubeconfig\") pod \"kube-scheduler-ci-4012.0.0-a-7f29c71dfa\" (UID: \"906848452a51444a2fdf03daf8691346\") " pod="kube-system/kube-scheduler-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:13.970370 kubelet[3151]: I0625 18:44:13.968391 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cca5e0e776d363741373c442e3d8197e-ca-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-7f29c71dfa\" (UID: \"cca5e0e776d363741373c442e3d8197e\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:13.970370 kubelet[3151]: I0625 18:44:13.968490 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cca5e0e776d363741373c442e3d8197e-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.0.0-a-7f29c71dfa\" (UID: \"cca5e0e776d363741373c442e3d8197e\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:13.970370 kubelet[3151]: I0625 18:44:13.968525 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eca0508d5e75459d0944c293356ddcb3-ca-certs\") pod \"kube-apiserver-ci-4012.0.0-a-7f29c71dfa\" (UID: \"eca0508d5e75459d0944c293356ddcb3\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:13.970370 kubelet[3151]: I0625 18:44:13.969005 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eca0508d5e75459d0944c293356ddcb3-k8s-certs\") pod \"kube-apiserver-ci-4012.0.0-a-7f29c71dfa\" (UID: \"eca0508d5e75459d0944c293356ddcb3\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:13.970370 kubelet[3151]: I0625 18:44:13.969214 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eca0508d5e75459d0944c293356ddcb3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.0.0-a-7f29c71dfa\" (UID: \"eca0508d5e75459d0944c293356ddcb3\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:13.979154 kubelet[3151]: I0625 18:44:13.979125 3151 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:13.988230 kubelet[3151]: W0625 18:44:13.987385 3151 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:44:13.990011 kubelet[3151]: W0625 18:44:13.989890 3151 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:44:13.990711 kubelet[3151]: W0625 18:44:13.990194 3151 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:44:13.990711 kubelet[3151]: E0625 18:44:13.990292 3151 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4012.0.0-a-7f29c71dfa\" already exists" pod="kube-system/kube-apiserver-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:13.999833 kubelet[3151]: I0625 18:44:13.999768 3151 kubelet_node_status.go:108] "Node was previously registered" node="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:14.000396 kubelet[3151]: I0625 18:44:14.000067 3151 kubelet_node_status.go:73] "Successfully registered node" node="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:14.831043 kubelet[3151]: I0625 18:44:14.830687 3151 apiserver.go:52] "Watching apiserver" Jun 25 18:44:14.867326 kubelet[3151]: I0625 18:44:14.867269 3151 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:44:14.932763 kubelet[3151]: W0625 18:44:14.932729 3151 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:44:14.932930 kubelet[3151]: E0625 18:44:14.932806 3151 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4012.0.0-a-7f29c71dfa\" already exists" pod="kube-system/kube-controller-manager-ci-4012.0.0-a-7f29c71dfa" Jun 25 18:44:14.944867 kubelet[3151]: I0625 18:44:14.944824 3151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4012.0.0-a-7f29c71dfa" podStartSLOduration=1.9447623379999999 podCreationTimestamp="2024-06-25 18:44:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:14.943804336 +0000 UTC m=+1.638322913" watchObservedRunningTime="2024-06-25 18:44:14.944762338 +0000 UTC m=+1.639281015" Jun 25 18:44:14.960300 kubelet[3151]: I0625 18:44:14.959927 3151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4012.0.0-a-7f29c71dfa" podStartSLOduration=2.959882474 podCreationTimestamp="2024-06-25 18:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:14.951993756 +0000 UTC m=+1.646512333" watchObservedRunningTime="2024-06-25 18:44:14.959882474 +0000 UTC m=+1.654401151" Jun 25 18:44:14.968467 kubelet[3151]: I0625 18:44:14.968267 3151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4012.0.0-a-7f29c71dfa" podStartSLOduration=1.9682265939999999 podCreationTimestamp="2024-06-25 18:44:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:14.960451376 +0000 UTC m=+1.654970053" watchObservedRunningTime="2024-06-25 18:44:14.968226594 +0000 UTC m=+1.662745171" Jun 25 18:44:18.275684 sudo[2241]: pam_unix(sudo:session): session closed for user root Jun 25 18:44:18.380716 sshd[2238]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:18.384595 systemd[1]: sshd@6-10.200.8.39:22-10.200.16.10:41912.service: Deactivated successfully. Jun 25 18:44:18.387133 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 18:44:18.387435 systemd[1]: session-9.scope: Consumed 4.849s CPU time, 136.7M memory peak, 0B memory swap peak. Jun 25 18:44:18.388955 systemd-logind[1695]: Session 9 logged out. Waiting for processes to exit. Jun 25 18:44:18.390061 systemd-logind[1695]: Removed session 9. Jun 25 18:44:27.860032 kubelet[3151]: I0625 18:44:27.859916 3151 topology_manager.go:215] "Topology Admit Handler" podUID="8ba76296-b8fd-4215-8ff8-65cfac224f2d" podNamespace="kube-system" podName="kube-proxy-nv5dc" Jun 25 18:44:27.870647 systemd[1]: Created slice kubepods-besteffort-pod8ba76296_b8fd_4215_8ff8_65cfac224f2d.slice - libcontainer container kubepods-besteffort-pod8ba76296_b8fd_4215_8ff8_65cfac224f2d.slice. 
Jun 25 18:44:27.962393 kubelet[3151]: I0625 18:44:27.962313 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ba76296-b8fd-4215-8ff8-65cfac224f2d-xtables-lock\") pod \"kube-proxy-nv5dc\" (UID: \"8ba76296-b8fd-4215-8ff8-65cfac224f2d\") " pod="kube-system/kube-proxy-nv5dc" Jun 25 18:44:27.962393 kubelet[3151]: I0625 18:44:27.962380 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ba76296-b8fd-4215-8ff8-65cfac224f2d-lib-modules\") pod \"kube-proxy-nv5dc\" (UID: \"8ba76296-b8fd-4215-8ff8-65cfac224f2d\") " pod="kube-system/kube-proxy-nv5dc" Jun 25 18:44:27.962393 kubelet[3151]: I0625 18:44:27.962410 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8ba76296-b8fd-4215-8ff8-65cfac224f2d-kube-proxy\") pod \"kube-proxy-nv5dc\" (UID: \"8ba76296-b8fd-4215-8ff8-65cfac224f2d\") " pod="kube-system/kube-proxy-nv5dc" Jun 25 18:44:27.962659 kubelet[3151]: I0625 18:44:27.962438 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjcwq\" (UniqueName: \"kubernetes.io/projected/8ba76296-b8fd-4215-8ff8-65cfac224f2d-kube-api-access-bjcwq\") pod \"kube-proxy-nv5dc\" (UID: \"8ba76296-b8fd-4215-8ff8-65cfac224f2d\") " pod="kube-system/kube-proxy-nv5dc" Jun 25 18:44:27.988397 kubelet[3151]: I0625 18:44:27.988365 3151 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 18:44:27.988851 containerd[1720]: time="2024-06-25T18:44:27.988797292Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 18:44:27.989411 kubelet[3151]: I0625 18:44:27.989050 3151 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 18:44:28.002860 kubelet[3151]: I0625 18:44:28.001849 3151 topology_manager.go:215] "Topology Admit Handler" podUID="781ae907-1977-45fb-8800-6158e586210b" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-6c594" Jun 25 18:44:28.012738 systemd[1]: Created slice kubepods-besteffort-pod781ae907_1977_45fb_8800_6158e586210b.slice - libcontainer container kubepods-besteffort-pod781ae907_1977_45fb_8800_6158e586210b.slice. 
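Even though no CNI config has been dropped in yet ("No cni config template is specified, wait for other system components to drop the config."), the kubelet already pushes the node's pod CIDR 192.168.0.0/24 down to the runtime. Purely for illustration, the address math for that range:

    import ipaddress

    node_pod_cidr = ipaddress.ip_network("192.168.0.0/24")   # from the "Updating Pod CIDR" line
    hosts = list(node_pod_cidr.hosts())
    print(node_pod_cidr.num_addresses, hosts[0], hosts[-1])
    # 256 192.168.0.1 192.168.0.254

How many of those addresses pods actually use is up to the CNI plugin, and Calico is only being installed at this point in the log.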
Jun 25 18:44:28.063279 kubelet[3151]: I0625 18:44:28.063139 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/781ae907-1977-45fb-8800-6158e586210b-var-lib-calico\") pod \"tigera-operator-76c4974c85-6c594\" (UID: \"781ae907-1977-45fb-8800-6158e586210b\") " pod="tigera-operator/tigera-operator-76c4974c85-6c594" Jun 25 18:44:28.063279 kubelet[3151]: I0625 18:44:28.063211 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kh9k\" (UniqueName: \"kubernetes.io/projected/781ae907-1977-45fb-8800-6158e586210b-kube-api-access-4kh9k\") pod \"tigera-operator-76c4974c85-6c594\" (UID: \"781ae907-1977-45fb-8800-6158e586210b\") " pod="tigera-operator/tigera-operator-76c4974c85-6c594" Jun 25 18:44:28.182314 containerd[1720]: time="2024-06-25T18:44:28.182200092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nv5dc,Uid:8ba76296-b8fd-4215-8ff8-65cfac224f2d,Namespace:kube-system,Attempt:0,}" Jun 25 18:44:28.225549 containerd[1720]: time="2024-06-25T18:44:28.225101803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:28.225549 containerd[1720]: time="2024-06-25T18:44:28.225166003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:28.225549 containerd[1720]: time="2024-06-25T18:44:28.225188203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:28.225549 containerd[1720]: time="2024-06-25T18:44:28.225204703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:28.248538 systemd[1]: Started cri-containerd-6a37aa576d508a0030f41888f9d430a83edde2b617fb1c1dbdce8df85ba21327.scope - libcontainer container 6a37aa576d508a0030f41888f9d430a83edde2b617fb1c1dbdce8df85ba21327. 
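systemd again starts a transient cri-containerd-….scope for the new sandbox (the one the next lines identify as kube-proxy's). If you were on the node, you could ask systemd whether that unit is still running; a sketch, assuming systemctl is available on the host:

    import subprocess

    unit = ("cri-containerd-6a37aa576d508a0030f41888f9d430a83edde2b6"
            "17fb1c1dbdce8df85ba21327.scope")
    state = subprocess.run(["systemctl", "is-active", unit],
                           capture_output=True, text=True)
    print(unit, "->", (state.stdout or state.stderr).strip())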
Jun 25 18:44:28.270470 containerd[1720]: time="2024-06-25T18:44:28.270219220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nv5dc,Uid:8ba76296-b8fd-4215-8ff8-65cfac224f2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a37aa576d508a0030f41888f9d430a83edde2b617fb1c1dbdce8df85ba21327\"" Jun 25 18:44:28.273922 containerd[1720]: time="2024-06-25T18:44:28.273880929Z" level=info msg="CreateContainer within sandbox \"6a37aa576d508a0030f41888f9d430a83edde2b617fb1c1dbdce8df85ba21327\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 18:44:28.316287 containerd[1720]: time="2024-06-25T18:44:28.316242938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-6c594,Uid:781ae907-1977-45fb-8800-6158e586210b,Namespace:tigera-operator,Attempt:0,}" Jun 25 18:44:28.343189 containerd[1720]: time="2024-06-25T18:44:28.343066608Z" level=info msg="CreateContainer within sandbox \"6a37aa576d508a0030f41888f9d430a83edde2b617fb1c1dbdce8df85ba21327\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"292d1280f80ab904e0fa6eba195d359ee69be186809e1c59f4a8fcb7a451dcd4\"" Jun 25 18:44:28.343886 containerd[1720]: time="2024-06-25T18:44:28.343857510Z" level=info msg="StartContainer for \"292d1280f80ab904e0fa6eba195d359ee69be186809e1c59f4a8fcb7a451dcd4\"" Jun 25 18:44:28.373748 systemd[1]: Started cri-containerd-292d1280f80ab904e0fa6eba195d359ee69be186809e1c59f4a8fcb7a451dcd4.scope - libcontainer container 292d1280f80ab904e0fa6eba195d359ee69be186809e1c59f4a8fcb7a451dcd4. Jun 25 18:44:28.387217 containerd[1720]: time="2024-06-25T18:44:28.386071519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:28.387569 containerd[1720]: time="2024-06-25T18:44:28.387498423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:28.387569 containerd[1720]: time="2024-06-25T18:44:28.387534823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:28.387877 containerd[1720]: time="2024-06-25T18:44:28.387549923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:28.415652 systemd[1]: Started cri-containerd-1254a75df7b306a9028b86764944195b6531d82c40836fb113a930db023935f6.scope - libcontainer container 1254a75df7b306a9028b86764944195b6531d82c40836fb113a930db023935f6. 
Jun 25 18:44:28.429547 containerd[1720]: time="2024-06-25T18:44:28.429501031Z" level=info msg="StartContainer for \"292d1280f80ab904e0fa6eba195d359ee69be186809e1c59f4a8fcb7a451dcd4\" returns successfully" Jun 25 18:44:28.481090 containerd[1720]: time="2024-06-25T18:44:28.480874664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-6c594,Uid:781ae907-1977-45fb-8800-6158e586210b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1254a75df7b306a9028b86764944195b6531d82c40836fb113a930db023935f6\"" Jun 25 18:44:28.484989 containerd[1720]: time="2024-06-25T18:44:28.484451973Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 18:44:28.951074 kubelet[3151]: I0625 18:44:28.950558 3151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nv5dc" podStartSLOduration=1.950514177 podCreationTimestamp="2024-06-25 18:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:28.950415377 +0000 UTC m=+15.644933954" watchObservedRunningTime="2024-06-25 18:44:28.950514177 +0000 UTC m=+15.645032754" Jun 25 18:44:30.328423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1992499121.mount: Deactivated successfully. Jun 25 18:44:31.079981 containerd[1720]: time="2024-06-25T18:44:31.079930358Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:31.081841 containerd[1720]: time="2024-06-25T18:44:31.081777565Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076060" Jun 25 18:44:31.086634 containerd[1720]: time="2024-06-25T18:44:31.086455083Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:31.091423 containerd[1720]: time="2024-06-25T18:44:31.091367502Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:31.092177 containerd[1720]: time="2024-06-25T18:44:31.092120305Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.607592132s" Jun 25 18:44:31.092177 containerd[1720]: time="2024-06-25T18:44:31.092162605Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 18:44:31.094411 containerd[1720]: time="2024-06-25T18:44:31.094239113Z" level=info msg="CreateContainer within sandbox \"1254a75df7b306a9028b86764944195b6531d82c40836fb113a930db023935f6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 18:44:31.144718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3534368655.mount: Deactivated successfully. 
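The tigera operator image pull above reports both the size and the wall-clock time, which gives a rough pull rate from quay.io. Back-of-the-envelope, for illustration:

    size_bytes = 22_070_263        # 'size "22070263"' reported by containerd
    duration_s = 2.607592132       # "... in 2.607592132s"
    print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")   # ~8.5 MB/s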
Jun 25 18:44:31.151849 containerd[1720]: time="2024-06-25T18:44:31.151806037Z" level=info msg="CreateContainer within sandbox \"1254a75df7b306a9028b86764944195b6531d82c40836fb113a930db023935f6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a2a0bf078c8b82a55b662cff226f40d1e36a4048f5690bcba14cc420d3a4b45e\"" Jun 25 18:44:31.153446 containerd[1720]: time="2024-06-25T18:44:31.152383240Z" level=info msg="StartContainer for \"a2a0bf078c8b82a55b662cff226f40d1e36a4048f5690bcba14cc420d3a4b45e\"" Jun 25 18:44:31.184499 systemd[1]: Started cri-containerd-a2a0bf078c8b82a55b662cff226f40d1e36a4048f5690bcba14cc420d3a4b45e.scope - libcontainer container a2a0bf078c8b82a55b662cff226f40d1e36a4048f5690bcba14cc420d3a4b45e. Jun 25 18:44:31.215649 containerd[1720]: time="2024-06-25T18:44:31.215584086Z" level=info msg="StartContainer for \"a2a0bf078c8b82a55b662cff226f40d1e36a4048f5690bcba14cc420d3a4b45e\" returns successfully" Jun 25 18:44:31.958537 kubelet[3151]: I0625 18:44:31.957934 3151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-6c594" podStartSLOduration=2.34886984 podCreationTimestamp="2024-06-25 18:44:27 +0000 UTC" firstStartedPulling="2024-06-25 18:44:28.48348617 +0000 UTC m=+15.178004747" lastFinishedPulling="2024-06-25 18:44:31.092506106 +0000 UTC m=+17.787024783" observedRunningTime="2024-06-25 18:44:31.957617875 +0000 UTC m=+18.652136452" watchObservedRunningTime="2024-06-25 18:44:31.957889876 +0000 UTC m=+18.652408453" Jun 25 18:44:34.093466 kubelet[3151]: I0625 18:44:34.092708 3151 topology_manager.go:215] "Topology Admit Handler" podUID="ed3e2768-7843-4613-af83-e2d27541363a" podNamespace="calico-system" podName="calico-typha-6ffb859b44-crfhn" Jun 25 18:44:34.104551 systemd[1]: Created slice kubepods-besteffort-poded3e2768_7843_4613_af83_e2d27541363a.slice - libcontainer container kubepods-besteffort-poded3e2768_7843_4613_af83_e2d27541363a.slice. 
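Unlike the static pods, tigera-operator had to pull its image, and the numbers in its startup-latency line show the pull window being excluded: watchObservedRunningTime minus podCreationTimestamp, minus (lastFinishedPulling - firstStartedPulling), reproduces the logged podStartSLOduration to within a microsecond. Illustrative arithmetic using only the seconds-of-minute values from that line:

    created        = 27.0            # podCreationTimestamp      18:44:27
    pull_started   = 28.48348617     # firstStartedPulling       18:44:28.48348617
    pull_finished  = 31.092506106    # lastFinishedPulling       18:44:31.092506106
    watch_observed = 31.957889876    # watchObservedRunningTime  18:44:31.957889876

    slo = (watch_observed - created) - (pull_finished - pull_started)
    print(round(slo, 6))   # 2.34887, vs. the logged podStartSLOduration=2.34886984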
Jun 25 18:44:34.105981 kubelet[3151]: I0625 18:44:34.104840 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3e2768-7843-4613-af83-e2d27541363a-tigera-ca-bundle\") pod \"calico-typha-6ffb859b44-crfhn\" (UID: \"ed3e2768-7843-4613-af83-e2d27541363a\") " pod="calico-system/calico-typha-6ffb859b44-crfhn" Jun 25 18:44:34.105981 kubelet[3151]: I0625 18:44:34.104886 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ed3e2768-7843-4613-af83-e2d27541363a-typha-certs\") pod \"calico-typha-6ffb859b44-crfhn\" (UID: \"ed3e2768-7843-4613-af83-e2d27541363a\") " pod="calico-system/calico-typha-6ffb859b44-crfhn" Jun 25 18:44:34.105981 kubelet[3151]: I0625 18:44:34.104919 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckcrv\" (UniqueName: \"kubernetes.io/projected/ed3e2768-7843-4613-af83-e2d27541363a-kube-api-access-ckcrv\") pod \"calico-typha-6ffb859b44-crfhn\" (UID: \"ed3e2768-7843-4613-af83-e2d27541363a\") " pod="calico-system/calico-typha-6ffb859b44-crfhn" Jun 25 18:44:34.221767 kubelet[3151]: I0625 18:44:34.221730 3151 topology_manager.go:215] "Topology Admit Handler" podUID="943edce0-0637-44b4-b1bf-b76f5610a2bb" podNamespace="calico-system" podName="calico-node-8dvp8" Jun 25 18:44:34.232775 systemd[1]: Created slice kubepods-besteffort-pod943edce0_0637_44b4_b1bf_b76f5610a2bb.slice - libcontainer container kubepods-besteffort-pod943edce0_0637_44b4_b1bf_b76f5610a2bb.slice. Jun 25 18:44:34.305496 kubelet[3151]: I0625 18:44:34.305457 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/943edce0-0637-44b4-b1bf-b76f5610a2bb-var-run-calico\") pod \"calico-node-8dvp8\" (UID: \"943edce0-0637-44b4-b1bf-b76f5610a2bb\") " pod="calico-system/calico-node-8dvp8" Jun 25 18:44:34.305679 kubelet[3151]: I0625 18:44:34.305512 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4j9b\" (UniqueName: \"kubernetes.io/projected/943edce0-0637-44b4-b1bf-b76f5610a2bb-kube-api-access-v4j9b\") pod \"calico-node-8dvp8\" (UID: \"943edce0-0637-44b4-b1bf-b76f5610a2bb\") " pod="calico-system/calico-node-8dvp8" Jun 25 18:44:34.305679 kubelet[3151]: I0625 18:44:34.305566 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/943edce0-0637-44b4-b1bf-b76f5610a2bb-lib-modules\") pod \"calico-node-8dvp8\" (UID: \"943edce0-0637-44b4-b1bf-b76f5610a2bb\") " pod="calico-system/calico-node-8dvp8" Jun 25 18:44:34.305679 kubelet[3151]: I0625 18:44:34.305593 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/943edce0-0637-44b4-b1bf-b76f5610a2bb-var-lib-calico\") pod \"calico-node-8dvp8\" (UID: \"943edce0-0637-44b4-b1bf-b76f5610a2bb\") " pod="calico-system/calico-node-8dvp8" Jun 25 18:44:34.305679 kubelet[3151]: I0625 18:44:34.305617 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/943edce0-0637-44b4-b1bf-b76f5610a2bb-cni-bin-dir\") pod \"calico-node-8dvp8\" (UID: 
\"943edce0-0637-44b4-b1bf-b76f5610a2bb\") " pod="calico-system/calico-node-8dvp8" Jun 25 18:44:34.305679 kubelet[3151]: I0625 18:44:34.305645 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/943edce0-0637-44b4-b1bf-b76f5610a2bb-tigera-ca-bundle\") pod \"calico-node-8dvp8\" (UID: \"943edce0-0637-44b4-b1bf-b76f5610a2bb\") " pod="calico-system/calico-node-8dvp8" Jun 25 18:44:34.305891 kubelet[3151]: I0625 18:44:34.305669 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/943edce0-0637-44b4-b1bf-b76f5610a2bb-cni-net-dir\") pod \"calico-node-8dvp8\" (UID: \"943edce0-0637-44b4-b1bf-b76f5610a2bb\") " pod="calico-system/calico-node-8dvp8" Jun 25 18:44:34.305891 kubelet[3151]: I0625 18:44:34.305695 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/943edce0-0637-44b4-b1bf-b76f5610a2bb-cni-log-dir\") pod \"calico-node-8dvp8\" (UID: \"943edce0-0637-44b4-b1bf-b76f5610a2bb\") " pod="calico-system/calico-node-8dvp8" Jun 25 18:44:34.305891 kubelet[3151]: I0625 18:44:34.305728 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/943edce0-0637-44b4-b1bf-b76f5610a2bb-flexvol-driver-host\") pod \"calico-node-8dvp8\" (UID: \"943edce0-0637-44b4-b1bf-b76f5610a2bb\") " pod="calico-system/calico-node-8dvp8" Jun 25 18:44:34.305891 kubelet[3151]: I0625 18:44:34.305756 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/943edce0-0637-44b4-b1bf-b76f5610a2bb-policysync\") pod \"calico-node-8dvp8\" (UID: \"943edce0-0637-44b4-b1bf-b76f5610a2bb\") " pod="calico-system/calico-node-8dvp8" Jun 25 18:44:34.305891 kubelet[3151]: I0625 18:44:34.305786 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/943edce0-0637-44b4-b1bf-b76f5610a2bb-xtables-lock\") pod \"calico-node-8dvp8\" (UID: \"943edce0-0637-44b4-b1bf-b76f5610a2bb\") " pod="calico-system/calico-node-8dvp8" Jun 25 18:44:34.306086 kubelet[3151]: I0625 18:44:34.305817 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/943edce0-0637-44b4-b1bf-b76f5610a2bb-node-certs\") pod \"calico-node-8dvp8\" (UID: \"943edce0-0637-44b4-b1bf-b76f5610a2bb\") " pod="calico-system/calico-node-8dvp8" Jun 25 18:44:34.355553 kubelet[3151]: I0625 18:44:34.353304 3151 topology_manager.go:215] "Topology Admit Handler" podUID="21944943-4c4a-467a-8098-f49bb7649567" podNamespace="calico-system" podName="csi-node-driver-dgqjn" Jun 25 18:44:34.355553 kubelet[3151]: E0625 18:44:34.353665 3151 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dgqjn" podUID="21944943-4c4a-467a-8098-f49bb7649567" Jun 25 18:44:34.407100 kubelet[3151]: I0625 18:44:34.406707 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" 
(UniqueName: \"kubernetes.io/host-path/21944943-4c4a-467a-8098-f49bb7649567-socket-dir\") pod \"csi-node-driver-dgqjn\" (UID: \"21944943-4c4a-467a-8098-f49bb7649567\") " pod="calico-system/csi-node-driver-dgqjn" Jun 25 18:44:34.408084 kubelet[3151]: E0625 18:44:34.408049 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.408084 kubelet[3151]: W0625 18:44:34.408078 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.408246 kubelet[3151]: E0625 18:44:34.408118 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.408246 kubelet[3151]: I0625 18:44:34.408148 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/21944943-4c4a-467a-8098-f49bb7649567-registration-dir\") pod \"csi-node-driver-dgqjn\" (UID: \"21944943-4c4a-467a-8098-f49bb7649567\") " pod="calico-system/csi-node-driver-dgqjn" Jun 25 18:44:34.408903 kubelet[3151]: E0625 18:44:34.408403 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.408903 kubelet[3151]: W0625 18:44:34.408417 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.408903 kubelet[3151]: E0625 18:44:34.408435 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.408903 kubelet[3151]: E0625 18:44:34.408673 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.408903 kubelet[3151]: W0625 18:44:34.408684 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.408903 kubelet[3151]: E0625 18:44:34.408715 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.409231 kubelet[3151]: E0625 18:44:34.409000 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.409231 kubelet[3151]: W0625 18:44:34.409011 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.409334 kubelet[3151]: E0625 18:44:34.409289 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:34.409895 kubelet[3151]: E0625 18:44:34.409837 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.409895 kubelet[3151]: W0625 18:44:34.409855 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.410115 kubelet[3151]: E0625 18:44:34.410094 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.411234 kubelet[3151]: E0625 18:44:34.411189 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.411234 kubelet[3151]: W0625 18:44:34.411223 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.413687 kubelet[3151]: E0625 18:44:34.411242 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.413890 kubelet[3151]: E0625 18:44:34.413876 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.413983 kubelet[3151]: W0625 18:44:34.413970 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.414346 kubelet[3151]: E0625 18:44:34.414063 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.414673 kubelet[3151]: E0625 18:44:34.414659 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.414774 kubelet[3151]: W0625 18:44:34.414760 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.414904 kubelet[3151]: E0625 18:44:34.414855 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.415255 kubelet[3151]: E0625 18:44:34.415161 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.415255 kubelet[3151]: W0625 18:44:34.415175 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.415255 kubelet[3151]: E0625 18:44:34.415202 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:34.415757 kubelet[3151]: E0625 18:44:34.415613 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.415757 kubelet[3151]: W0625 18:44:34.415627 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.415757 kubelet[3151]: E0625 18:44:34.415715 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.416236 kubelet[3151]: E0625 18:44:34.416055 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.416236 kubelet[3151]: W0625 18:44:34.416068 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.416236 kubelet[3151]: E0625 18:44:34.416096 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.416564 kubelet[3151]: E0625 18:44:34.416471 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.416564 kubelet[3151]: W0625 18:44:34.416484 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.416564 kubelet[3151]: E0625 18:44:34.416513 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.416564 kubelet[3151]: I0625 18:44:34.416541 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rld7d\" (UniqueName: \"kubernetes.io/projected/21944943-4c4a-467a-8098-f49bb7649567-kube-api-access-rld7d\") pod \"csi-node-driver-dgqjn\" (UID: \"21944943-4c4a-467a-8098-f49bb7649567\") " pod="calico-system/csi-node-driver-dgqjn" Jun 25 18:44:34.417164 kubelet[3151]: E0625 18:44:34.417011 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.417164 kubelet[3151]: W0625 18:44:34.417025 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.417164 kubelet[3151]: E0625 18:44:34.417056 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:34.417684 kubelet[3151]: E0625 18:44:34.417517 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.417684 kubelet[3151]: W0625 18:44:34.417531 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.417684 kubelet[3151]: E0625 18:44:34.417566 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.418172 kubelet[3151]: E0625 18:44:34.418011 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.418172 kubelet[3151]: W0625 18:44:34.418025 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.418172 kubelet[3151]: E0625 18:44:34.418054 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.419416 kubelet[3151]: E0625 18:44:34.419243 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.419416 kubelet[3151]: W0625 18:44:34.419258 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.419416 kubelet[3151]: E0625 18:44:34.419361 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.419737 kubelet[3151]: E0625 18:44:34.419641 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.419737 kubelet[3151]: W0625 18:44:34.419654 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.419900 kubelet[3151]: E0625 18:44:34.419849 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.420246 kubelet[3151]: E0625 18:44:34.420147 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.420246 kubelet[3151]: W0625 18:44:34.420160 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.420519 kubelet[3151]: E0625 18:44:34.420437 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:34.420519 kubelet[3151]: E0625 18:44:34.420495 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.420519 kubelet[3151]: W0625 18:44:34.420504 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.420834 kubelet[3151]: E0625 18:44:34.420686 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.421149 kubelet[3151]: E0625 18:44:34.421049 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.421149 kubelet[3151]: W0625 18:44:34.421063 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.421149 kubelet[3151]: E0625 18:44:34.421091 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.421847 kubelet[3151]: E0625 18:44:34.421713 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.421847 kubelet[3151]: W0625 18:44:34.421727 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.421847 kubelet[3151]: E0625 18:44:34.421745 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.422457 kubelet[3151]: E0625 18:44:34.422238 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.422457 kubelet[3151]: W0625 18:44:34.422252 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.422457 kubelet[3151]: E0625 18:44:34.422269 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.422767 kubelet[3151]: E0625 18:44:34.422657 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.422767 kubelet[3151]: W0625 18:44:34.422672 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.422767 kubelet[3151]: E0625 18:44:34.422690 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:34.423744 kubelet[3151]: E0625 18:44:34.423513 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.423744 kubelet[3151]: W0625 18:44:34.423528 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.423744 kubelet[3151]: E0625 18:44:34.423547 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.424053 kubelet[3151]: E0625 18:44:34.424040 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.424230 kubelet[3151]: W0625 18:44:34.424122 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.424230 kubelet[3151]: E0625 18:44:34.424147 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.424504 kubelet[3151]: E0625 18:44:34.424490 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.424714 kubelet[3151]: W0625 18:44:34.424587 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.424714 kubelet[3151]: E0625 18:44:34.424608 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.424964 kubelet[3151]: E0625 18:44:34.424952 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.425040 kubelet[3151]: W0625 18:44:34.425029 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.425309 kubelet[3151]: E0625 18:44:34.425099 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.425556 kubelet[3151]: E0625 18:44:34.425542 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.425963 kubelet[3151]: W0625 18:44:34.425788 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.425963 kubelet[3151]: E0625 18:44:34.425813 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:34.425963 kubelet[3151]: I0625 18:44:34.425842 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/21944943-4c4a-467a-8098-f49bb7649567-varrun\") pod \"csi-node-driver-dgqjn\" (UID: \"21944943-4c4a-467a-8098-f49bb7649567\") " pod="calico-system/csi-node-driver-dgqjn" Jun 25 18:44:34.426959 kubelet[3151]: E0625 18:44:34.426221 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.426959 kubelet[3151]: W0625 18:44:34.426235 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.426959 kubelet[3151]: E0625 18:44:34.426252 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.427506 kubelet[3151]: E0625 18:44:34.427332 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.427506 kubelet[3151]: W0625 18:44:34.427346 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.427506 kubelet[3151]: E0625 18:44:34.427380 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.435724 kubelet[3151]: E0625 18:44:34.432401 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.435724 kubelet[3151]: W0625 18:44:34.432415 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.435724 kubelet[3151]: E0625 18:44:34.432432 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.435880 containerd[1720]: time="2024-06-25T18:44:34.435205222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6ffb859b44-crfhn,Uid:ed3e2768-7843-4613-af83-e2d27541363a,Namespace:calico-system,Attempt:0,}" Jun 25 18:44:34.437819 kubelet[3151]: E0625 18:44:34.437669 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.437819 kubelet[3151]: W0625 18:44:34.437685 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.437819 kubelet[3151]: E0625 18:44:34.437704 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:34.438422 kubelet[3151]: E0625 18:44:34.438181 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.438422 kubelet[3151]: W0625 18:44:34.438198 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.438696 kubelet[3151]: E0625 18:44:34.438522 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.439400 kubelet[3151]: E0625 18:44:34.439341 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.439400 kubelet[3151]: W0625 18:44:34.439381 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.439816 kubelet[3151]: E0625 18:44:34.439626 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.440184 kubelet[3151]: E0625 18:44:34.439982 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.440184 kubelet[3151]: W0625 18:44:34.439996 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.440184 kubelet[3151]: E0625 18:44:34.440025 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.441080 kubelet[3151]: E0625 18:44:34.440940 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.441080 kubelet[3151]: W0625 18:44:34.440960 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.441080 kubelet[3151]: E0625 18:44:34.440981 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.441823 kubelet[3151]: E0625 18:44:34.441639 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.441823 kubelet[3151]: W0625 18:44:34.441654 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.441823 kubelet[3151]: E0625 18:44:34.441701 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:34.442311 kubelet[3151]: E0625 18:44:34.442205 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.442311 kubelet[3151]: W0625 18:44:34.442218 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.442589 kubelet[3151]: E0625 18:44:34.442425 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.442589 kubelet[3151]: I0625 18:44:34.442459 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21944943-4c4a-467a-8098-f49bb7649567-kubelet-dir\") pod \"csi-node-driver-dgqjn\" (UID: \"21944943-4c4a-467a-8098-f49bb7649567\") " pod="calico-system/csi-node-driver-dgqjn" Jun 25 18:44:34.443045 kubelet[3151]: E0625 18:44:34.442825 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.443045 kubelet[3151]: W0625 18:44:34.442844 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.443045 kubelet[3151]: E0625 18:44:34.442936 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.443330 kubelet[3151]: E0625 18:44:34.443300 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.443330 kubelet[3151]: W0625 18:44:34.443315 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.443594 kubelet[3151]: E0625 18:44:34.443553 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.443877 kubelet[3151]: E0625 18:44:34.443836 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.443877 kubelet[3151]: W0625 18:44:34.443863 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.444082 kubelet[3151]: E0625 18:44:34.444048 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:34.444446 kubelet[3151]: E0625 18:44:34.444363 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.444446 kubelet[3151]: W0625 18:44:34.444383 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.444668 kubelet[3151]: E0625 18:44:34.444534 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.445132 kubelet[3151]: E0625 18:44:34.445099 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.445132 kubelet[3151]: W0625 18:44:34.445112 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.445384 kubelet[3151]: E0625 18:44:34.445335 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.446143 kubelet[3151]: E0625 18:44:34.446035 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.446143 kubelet[3151]: W0625 18:44:34.446052 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.447671 kubelet[3151]: E0625 18:44:34.447570 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.447671 kubelet[3151]: W0625 18:44:34.447585 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.448068 kubelet[3151]: E0625 18:44:34.447969 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.448068 kubelet[3151]: W0625 18:44:34.447983 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.448376 kubelet[3151]: E0625 18:44:34.448302 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.448376 kubelet[3151]: W0625 18:44:34.448315 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.448795 kubelet[3151]: E0625 18:44:34.448666 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.448795 kubelet[3151]: W0625 18:44:34.448679 3151 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.448795 kubelet[3151]: E0625 18:44:34.448696 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.448795 kubelet[3151]: E0625 18:44:34.448717 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.449650 kubelet[3151]: E0625 18:44:34.449508 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.449650 kubelet[3151]: W0625 18:44:34.449522 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.449650 kubelet[3151]: E0625 18:44:34.449540 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.450378 kubelet[3151]: E0625 18:44:34.450277 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.450378 kubelet[3151]: W0625 18:44:34.450292 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.450378 kubelet[3151]: E0625 18:44:34.450308 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.450378 kubelet[3151]: E0625 18:44:34.450333 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.451780 kubelet[3151]: E0625 18:44:34.451579 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.451780 kubelet[3151]: W0625 18:44:34.451594 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.451780 kubelet[3151]: E0625 18:44:34.451611 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.452118 kubelet[3151]: E0625 18:44:34.451991 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.452118 kubelet[3151]: W0625 18:44:34.452004 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.452118 kubelet[3151]: E0625 18:44:34.452021 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:34.452118 kubelet[3151]: E0625 18:44:34.452045 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.454380 kubelet[3151]: E0625 18:44:34.453163 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.454380 kubelet[3151]: E0625 18:44:34.453240 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.454380 kubelet[3151]: W0625 18:44:34.453249 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.454380 kubelet[3151]: E0625 18:44:34.453267 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.454826 kubelet[3151]: E0625 18:44:34.454729 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.454826 kubelet[3151]: W0625 18:44:34.454744 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.455105 kubelet[3151]: E0625 18:44:34.454962 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.457620 kubelet[3151]: E0625 18:44:34.457605 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.459863 kubelet[3151]: W0625 18:44:34.457667 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.459863 kubelet[3151]: E0625 18:44:34.457786 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.459863 kubelet[3151]: E0625 18:44:34.459433 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.459863 kubelet[3151]: W0625 18:44:34.459445 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.459863 kubelet[3151]: E0625 18:44:34.459461 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:34.459863 kubelet[3151]: E0625 18:44:34.459703 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.459863 kubelet[3151]: W0625 18:44:34.459716 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.459863 kubelet[3151]: E0625 18:44:34.459734 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.461130 kubelet[3151]: E0625 18:44:34.460408 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.461130 kubelet[3151]: W0625 18:44:34.460422 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.461130 kubelet[3151]: E0625 18:44:34.460440 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.462562 kubelet[3151]: E0625 18:44:34.462542 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.462657 kubelet[3151]: W0625 18:44:34.462644 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.462743 kubelet[3151]: E0625 18:44:34.462732 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.498414 containerd[1720]: time="2024-06-25T18:44:34.496485060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:34.498414 containerd[1720]: time="2024-06-25T18:44:34.496570861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:34.498414 containerd[1720]: time="2024-06-25T18:44:34.496688961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:34.498414 containerd[1720]: time="2024-06-25T18:44:34.496709461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:34.533553 systemd[1]: Started cri-containerd-3b4cd2a7997f95f9503c0b276f7e187833b9b0596d0f03aebce6c079f0b2b197.scope - libcontainer container 3b4cd2a7997f95f9503c0b276f7e187833b9b0596d0f03aebce6c079f0b2b197. 
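Note: the kubepods-besteffort-pod943edce0_0637_44b4_b1bf_b76f5610a2bb.slice unit created earlier in this window is derived mechanically from the pod UID reported by the Topology Admit Handler entry: with the systemd cgroup driver, "-" acts as a hierarchy separator in slice unit names, so the kubelet replaces the dashes inside the UID with underscores under the kubepods-besteffort parent. A minimal sketch of that mapping (illustrative only, not kubelet source; the helper name is made up):

    // Derive the BestEffort pod slice name seen in this log from a pod UID.
    // Assumes the systemd cgroup driver; besteffortSliceName is a hypothetical helper.
    package main

    import (
        "fmt"
        "strings"
    )

    func besteffortSliceName(podUID string) string {
        // "-" separates parent/child slices in systemd unit names, so the kubelet
        // escapes dashes inside the UID to "_" to keep the UID as one segment.
        return fmt.Sprintf("kubepods-besteffort-pod%s.slice", strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // UID taken from the calico-node-8dvp8 admit entry above.
        fmt.Println(besteffortSliceName("943edce0-0637-44b4-b1bf-b76f5610a2bb"))
        // kubepods-besteffort-pod943edce0_0637_44b4_b1bf_b76f5610a2bb.slice
    }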
Jun 25 18:44:34.539267 containerd[1720]: time="2024-06-25T18:44:34.538538824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8dvp8,Uid:943edce0-0637-44b4-b1bf-b76f5610a2bb,Namespace:calico-system,Attempt:0,}" Jun 25 18:44:34.545210 kubelet[3151]: E0625 18:44:34.545186 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.545705 kubelet[3151]: W0625 18:44:34.545678 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.545836 kubelet[3151]: E0625 18:44:34.545824 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.546383 kubelet[3151]: E0625 18:44:34.546367 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.546485 kubelet[3151]: W0625 18:44:34.546472 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.546572 kubelet[3151]: E0625 18:44:34.546563 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.546942 kubelet[3151]: E0625 18:44:34.546848 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.547040 kubelet[3151]: W0625 18:44:34.547028 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.547121 kubelet[3151]: E0625 18:44:34.547112 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.547426 kubelet[3151]: E0625 18:44:34.547412 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.547510 kubelet[3151]: W0625 18:44:34.547499 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.547602 kubelet[3151]: E0625 18:44:34.547593 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:34.548021 kubelet[3151]: E0625 18:44:34.548005 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.548113 kubelet[3151]: W0625 18:44:34.548102 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.548259 kubelet[3151]: E0625 18:44:34.548185 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.548582 kubelet[3151]: E0625 18:44:34.548495 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.548582 kubelet[3151]: W0625 18:44:34.548508 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.548690 kubelet[3151]: E0625 18:44:34.548588 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.548967 kubelet[3151]: E0625 18:44:34.548874 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.548967 kubelet[3151]: W0625 18:44:34.548886 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.549161 kubelet[3151]: E0625 18:44:34.549098 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.549302 kubelet[3151]: E0625 18:44:34.549290 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.549491 kubelet[3151]: W0625 18:44:34.549378 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.549491 kubelet[3151]: E0625 18:44:34.549418 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.550082 kubelet[3151]: E0625 18:44:34.549962 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.550082 kubelet[3151]: W0625 18:44:34.549978 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.550082 kubelet[3151]: E0625 18:44:34.550023 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:34.550333 kubelet[3151]: E0625 18:44:34.550303 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.550333 kubelet[3151]: W0625 18:44:34.550316 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.550613 kubelet[3151]: E0625 18:44:34.550545 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.550784 kubelet[3151]: E0625 18:44:34.550771 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.550964 kubelet[3151]: W0625 18:44:34.550860 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.550964 kubelet[3151]: E0625 18:44:34.550911 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.551200 kubelet[3151]: E0625 18:44:34.551139 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.551200 kubelet[3151]: W0625 18:44:34.551150 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.551200 kubelet[3151]: E0625 18:44:34.551197 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.551685 kubelet[3151]: E0625 18:44:34.551578 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.551685 kubelet[3151]: W0625 18:44:34.551590 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.551809 kubelet[3151]: E0625 18:44:34.551748 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.552518 kubelet[3151]: E0625 18:44:34.551989 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.552518 kubelet[3151]: W0625 18:44:34.552003 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.552518 kubelet[3151]: E0625 18:44:34.552091 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:34.552518 kubelet[3151]: E0625 18:44:34.552236 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.552518 kubelet[3151]: W0625 18:44:34.552244 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.552518 kubelet[3151]: E0625 18:44:34.552280 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.553390 kubelet[3151]: E0625 18:44:34.553141 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.553390 kubelet[3151]: W0625 18:44:34.553155 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.553614 kubelet[3151]: E0625 18:44:34.553504 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.553973 kubelet[3151]: E0625 18:44:34.553847 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.553973 kubelet[3151]: W0625 18:44:34.553861 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.554239 kubelet[3151]: E0625 18:44:34.554115 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.554493 kubelet[3151]: E0625 18:44:34.554391 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.554493 kubelet[3151]: W0625 18:44:34.554407 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.556003 kubelet[3151]: E0625 18:44:34.554546 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.556003 kubelet[3151]: E0625 18:44:34.554704 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.556003 kubelet[3151]: W0625 18:44:34.554714 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.556003 kubelet[3151]: E0625 18:44:34.554744 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:34.556003 kubelet[3151]: E0625 18:44:34.554908 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.556003 kubelet[3151]: W0625 18:44:34.554917 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.556003 kubelet[3151]: E0625 18:44:34.555085 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.556003 kubelet[3151]: W0625 18:44:34.555094 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.556003 kubelet[3151]: E0625 18:44:34.555278 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.556003 kubelet[3151]: W0625 18:44:34.555287 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.557316 kubelet[3151]: E0625 18:44:34.555301 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.557316 kubelet[3151]: E0625 18:44:34.555328 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.557316 kubelet[3151]: E0625 18:44:34.555477 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.557316 kubelet[3151]: E0625 18:44:34.555799 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.557316 kubelet[3151]: W0625 18:44:34.555809 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.557316 kubelet[3151]: E0625 18:44:34.555844 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.557316 kubelet[3151]: E0625 18:44:34.556708 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.557316 kubelet[3151]: W0625 18:44:34.556720 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.557316 kubelet[3151]: E0625 18:44:34.556742 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:34.558596 kubelet[3151]: E0625 18:44:34.558579 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.558596 kubelet[3151]: W0625 18:44:34.558597 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.558726 kubelet[3151]: E0625 18:44:34.558615 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.571455 kubelet[3151]: E0625 18:44:34.570742 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:34.571455 kubelet[3151]: W0625 18:44:34.570861 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:34.571455 kubelet[3151]: E0625 18:44:34.570886 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:34.591412 containerd[1720]: time="2024-06-25T18:44:34.591302230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:34.591570 containerd[1720]: time="2024-06-25T18:44:34.591430230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:34.591570 containerd[1720]: time="2024-06-25T18:44:34.591500730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:34.591666 containerd[1720]: time="2024-06-25T18:44:34.591597931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:34.612570 containerd[1720]: time="2024-06-25T18:44:34.612219411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6ffb859b44-crfhn,Uid:ed3e2768-7843-4613-af83-e2d27541363a,Namespace:calico-system,Attempt:0,} returns sandbox id \"3b4cd2a7997f95f9503c0b276f7e187833b9b0596d0f03aebce6c079f0b2b197\"" Jun 25 18:44:34.616431 containerd[1720]: time="2024-06-25T18:44:34.615684624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 18:44:34.622051 systemd[1]: Started cri-containerd-644fd9d190646306f4ff5632defc79809ab7a11a8cb6d5423621bc8131698434.scope - libcontainer container 644fd9d190646306f4ff5632defc79809ab7a11a8cb6d5423621bc8131698434. 
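The repeated kubelet errors above all come from its FlexVolume plugin prober: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument, gets no output because that binary is not installed yet (the Calico flexvol-driver container started further down in this log is what normally installs it), and then fails to decode the empty output as JSON. The Go sketch below mimics that call sequence under stated assumptions: the driverStatus type and the standalone program are illustrative stand-ins for kubelet's internal driver-call code, not the real implementation; only the driver path is taken from the journal.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is an illustrative stand-in for the JSON object a FlexVolume
// driver is expected to print in response to "init", e.g.
// {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// Driver path taken from the journal above; on this node the binary is
	// not present yet when the prober runs.
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		// Missing binary: the exec itself fails, which kubelet surfaces as
		// the "FlexVolume: driver call failed" warning with empty output.
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}

	var status driverStatus
	if err := json.Unmarshal(out, &status); err != nil {
		// Decoding empty output yields exactly "unexpected end of JSON
		// input", matching the driver-call.go error in the log.
		fmt.Printf("failed to unmarshal output for command: init, error: %v\n", err)
		return
	}
	fmt.Printf("driver initialised: %+v\n", status)
}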
Jun 25 18:44:34.650901 containerd[1720]: time="2024-06-25T18:44:34.650802761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8dvp8,Uid:943edce0-0637-44b4-b1bf-b76f5610a2bb,Namespace:calico-system,Attempt:0,} returns sandbox id \"644fd9d190646306f4ff5632defc79809ab7a11a8cb6d5423621bc8131698434\"" Jun 25 18:44:35.867766 kubelet[3151]: E0625 18:44:35.865728 3151 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dgqjn" podUID="21944943-4c4a-467a-8098-f49bb7649567" Jun 25 18:44:37.595139 containerd[1720]: time="2024-06-25T18:44:37.595089425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:37.597239 containerd[1720]: time="2024-06-25T18:44:37.597168633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 18:44:37.601015 containerd[1720]: time="2024-06-25T18:44:37.600958748Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:37.604642 containerd[1720]: time="2024-06-25T18:44:37.604608362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:37.605845 containerd[1720]: time="2024-06-25T18:44:37.605302565Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 2.98957454s" Jun 25 18:44:37.605845 containerd[1720]: time="2024-06-25T18:44:37.605343365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 18:44:37.606559 containerd[1720]: time="2024-06-25T18:44:37.606524770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 18:44:37.626391 containerd[1720]: time="2024-06-25T18:44:37.626141346Z" level=info msg="CreateContainer within sandbox \"3b4cd2a7997f95f9503c0b276f7e187833b9b0596d0f03aebce6c079f0b2b197\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 18:44:37.662615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1009421834.mount: Deactivated successfully. 
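For context, the PullImage and ImageCreate records above are CRI requests served by containerd on this node. The Go sketch below shows how the same pull could be reproduced against the containerd socket: the socket path, the "k8s.io" namespace and the image reference mirror this log, while the minimal error handling and missing registry auth are simplifying assumptions rather than what the kubelet actually does.

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the node's containerd socket; the kubelet's CRI calls in
	// the journal go through the same daemon.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect to containerd: %v", err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the same Typha image the kubelet requested above.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.28.0", containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull: %v", err)
	}
	log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
}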
Jun 25 18:44:37.671550 containerd[1720]: time="2024-06-25T18:44:37.671510023Z" level=info msg="CreateContainer within sandbox \"3b4cd2a7997f95f9503c0b276f7e187833b9b0596d0f03aebce6c079f0b2b197\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"63dc29fa7510b49d3e7b5d305696154e0602139f7411e1b3051b3416cedabfe1\"" Jun 25 18:44:37.672145 containerd[1720]: time="2024-06-25T18:44:37.672119225Z" level=info msg="StartContainer for \"63dc29fa7510b49d3e7b5d305696154e0602139f7411e1b3051b3416cedabfe1\"" Jun 25 18:44:37.709533 systemd[1]: Started cri-containerd-63dc29fa7510b49d3e7b5d305696154e0602139f7411e1b3051b3416cedabfe1.scope - libcontainer container 63dc29fa7510b49d3e7b5d305696154e0602139f7411e1b3051b3416cedabfe1. Jun 25 18:44:37.760851 containerd[1720]: time="2024-06-25T18:44:37.760751270Z" level=info msg="StartContainer for \"63dc29fa7510b49d3e7b5d305696154e0602139f7411e1b3051b3416cedabfe1\" returns successfully" Jun 25 18:44:37.867756 kubelet[3151]: E0625 18:44:37.867643 3151 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dgqjn" podUID="21944943-4c4a-467a-8098-f49bb7649567" Jun 25 18:44:38.028173 kubelet[3151]: E0625 18:44:38.027995 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.028173 kubelet[3151]: W0625 18:44:38.028019 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.028173 kubelet[3151]: E0625 18:44:38.028044 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.028793 kubelet[3151]: E0625 18:44:38.028765 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.029028 kubelet[3151]: W0625 18:44:38.028885 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.029028 kubelet[3151]: E0625 18:44:38.028913 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.029304 kubelet[3151]: E0625 18:44:38.029257 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.029304 kubelet[3151]: W0625 18:44:38.029269 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.029513 kubelet[3151]: E0625 18:44:38.029384 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:38.029812 kubelet[3151]: E0625 18:44:38.029781 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.029963 kubelet[3151]: W0625 18:44:38.029891 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.029963 kubelet[3151]: E0625 18:44:38.029911 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.030341 kubelet[3151]: E0625 18:44:38.030309 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.030341 kubelet[3151]: W0625 18:44:38.030320 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.030584 kubelet[3151]: E0625 18:44:38.030478 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.031015 kubelet[3151]: E0625 18:44:38.030947 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.031015 kubelet[3151]: W0625 18:44:38.030968 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.031015 kubelet[3151]: E0625 18:44:38.030988 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.031279 kubelet[3151]: E0625 18:44:38.031271 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.031279 kubelet[3151]: W0625 18:44:38.031283 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.031464 kubelet[3151]: E0625 18:44:38.031299 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.031578 kubelet[3151]: E0625 18:44:38.031557 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.031578 kubelet[3151]: W0625 18:44:38.031572 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.031744 kubelet[3151]: E0625 18:44:38.031588 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:38.031847 kubelet[3151]: E0625 18:44:38.031828 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.031847 kubelet[3151]: W0625 18:44:38.031841 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.031939 kubelet[3151]: E0625 18:44:38.031869 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.032100 kubelet[3151]: E0625 18:44:38.032091 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.032187 kubelet[3151]: W0625 18:44:38.032168 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.032246 kubelet[3151]: E0625 18:44:38.032192 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.032435 kubelet[3151]: E0625 18:44:38.032416 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.032435 kubelet[3151]: W0625 18:44:38.032429 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.032551 kubelet[3151]: E0625 18:44:38.032462 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.032705 kubelet[3151]: E0625 18:44:38.032689 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.032769 kubelet[3151]: W0625 18:44:38.032714 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.032769 kubelet[3151]: E0625 18:44:38.032730 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.032964 kubelet[3151]: E0625 18:44:38.032948 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.032964 kubelet[3151]: W0625 18:44:38.032963 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.033127 kubelet[3151]: E0625 18:44:38.032979 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:38.033222 kubelet[3151]: E0625 18:44:38.033203 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.033222 kubelet[3151]: W0625 18:44:38.033216 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.033407 kubelet[3151]: E0625 18:44:38.033231 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.033498 kubelet[3151]: E0625 18:44:38.033452 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.033498 kubelet[3151]: W0625 18:44:38.033462 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.033498 kubelet[3151]: E0625 18:44:38.033478 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.076302 kubelet[3151]: E0625 18:44:38.076276 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.076302 kubelet[3151]: W0625 18:44:38.076293 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.076572 kubelet[3151]: E0625 18:44:38.076319 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.076666 kubelet[3151]: E0625 18:44:38.076654 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.076714 kubelet[3151]: W0625 18:44:38.076666 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.076714 kubelet[3151]: E0625 18:44:38.076697 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.076998 kubelet[3151]: E0625 18:44:38.076979 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.076998 kubelet[3151]: W0625 18:44:38.076991 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.077178 kubelet[3151]: E0625 18:44:38.077013 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:38.077280 kubelet[3151]: E0625 18:44:38.077265 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.077280 kubelet[3151]: W0625 18:44:38.077277 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.077393 kubelet[3151]: E0625 18:44:38.077298 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.077562 kubelet[3151]: E0625 18:44:38.077545 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.077562 kubelet[3151]: W0625 18:44:38.077558 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.077809 kubelet[3151]: E0625 18:44:38.077579 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.077809 kubelet[3151]: E0625 18:44:38.077807 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.077901 kubelet[3151]: W0625 18:44:38.077818 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.077943 kubelet[3151]: E0625 18:44:38.077912 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.078094 kubelet[3151]: E0625 18:44:38.078077 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.078094 kubelet[3151]: W0625 18:44:38.078092 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.078278 kubelet[3151]: E0625 18:44:38.078156 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.078339 kubelet[3151]: E0625 18:44:38.078319 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.078339 kubelet[3151]: W0625 18:44:38.078335 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.078494 kubelet[3151]: E0625 18:44:38.078456 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:38.078613 kubelet[3151]: E0625 18:44:38.078597 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.078613 kubelet[3151]: W0625 18:44:38.078609 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.078730 kubelet[3151]: E0625 18:44:38.078630 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.079278 kubelet[3151]: E0625 18:44:38.079062 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.079278 kubelet[3151]: W0625 18:44:38.079087 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.079278 kubelet[3151]: E0625 18:44:38.079111 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.079484 kubelet[3151]: E0625 18:44:38.079423 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.079484 kubelet[3151]: W0625 18:44:38.079435 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.079484 kubelet[3151]: E0625 18:44:38.079460 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.079743 kubelet[3151]: E0625 18:44:38.079724 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.079743 kubelet[3151]: W0625 18:44:38.079739 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.079835 kubelet[3151]: E0625 18:44:38.079760 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.079992 kubelet[3151]: E0625 18:44:38.079974 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.079992 kubelet[3151]: W0625 18:44:38.079988 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.080160 kubelet[3151]: E0625 18:44:38.080016 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:38.080272 kubelet[3151]: E0625 18:44:38.080242 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.080272 kubelet[3151]: W0625 18:44:38.080258 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.080272 kubelet[3151]: E0625 18:44:38.080283 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.080551 kubelet[3151]: E0625 18:44:38.080538 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.080607 kubelet[3151]: W0625 18:44:38.080553 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.080607 kubelet[3151]: E0625 18:44:38.080573 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.080918 kubelet[3151]: E0625 18:44:38.080902 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.080918 kubelet[3151]: W0625 18:44:38.080914 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.081059 kubelet[3151]: E0625 18:44:38.081025 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.081245 kubelet[3151]: E0625 18:44:38.081207 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.081245 kubelet[3151]: W0625 18:44:38.081245 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.081384 kubelet[3151]: E0625 18:44:38.081263 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:38.081744 kubelet[3151]: E0625 18:44:38.081724 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:38.081744 kubelet[3151]: W0625 18:44:38.081738 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:38.081866 kubelet[3151]: E0625 18:44:38.081754 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:38.974795 kubelet[3151]: I0625 18:44:38.974758 3151 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:44:39.040514 kubelet[3151]: E0625 18:44:39.039977 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.040514 kubelet[3151]: W0625 18:44:39.040010 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.040514 kubelet[3151]: E0625 18:44:39.040068 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.041168 kubelet[3151]: E0625 18:44:39.040595 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.041168 kubelet[3151]: W0625 18:44:39.040609 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.041168 kubelet[3151]: E0625 18:44:39.040738 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.042137 kubelet[3151]: E0625 18:44:39.041587 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.042137 kubelet[3151]: W0625 18:44:39.041623 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.042137 kubelet[3151]: E0625 18:44:39.041649 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.042137 kubelet[3151]: E0625 18:44:39.042004 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.042137 kubelet[3151]: W0625 18:44:39.042018 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.042137 kubelet[3151]: E0625 18:44:39.042050 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.043224 kubelet[3151]: E0625 18:44:39.042854 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.043224 kubelet[3151]: W0625 18:44:39.042873 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.043224 kubelet[3151]: E0625 18:44:39.042893 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:39.043224 kubelet[3151]: E0625 18:44:39.043143 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.043224 kubelet[3151]: W0625 18:44:39.043154 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.043224 kubelet[3151]: E0625 18:44:39.043172 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.044195 kubelet[3151]: E0625 18:44:39.043813 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.044195 kubelet[3151]: W0625 18:44:39.043829 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.044195 kubelet[3151]: E0625 18:44:39.043849 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.044195 kubelet[3151]: E0625 18:44:39.044093 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.044195 kubelet[3151]: W0625 18:44:39.044107 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.044195 kubelet[3151]: E0625 18:44:39.044125 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.045138 kubelet[3151]: E0625 18:44:39.044788 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.045138 kubelet[3151]: W0625 18:44:39.044805 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.045138 kubelet[3151]: E0625 18:44:39.044826 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.045138 kubelet[3151]: E0625 18:44:39.045062 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.045138 kubelet[3151]: W0625 18:44:39.045074 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.045138 kubelet[3151]: E0625 18:44:39.045091 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:39.046096 kubelet[3151]: E0625 18:44:39.045724 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.046096 kubelet[3151]: W0625 18:44:39.045741 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.046096 kubelet[3151]: E0625 18:44:39.045764 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.046096 kubelet[3151]: E0625 18:44:39.046001 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.046096 kubelet[3151]: W0625 18:44:39.046013 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.046096 kubelet[3151]: E0625 18:44:39.046033 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.046797 kubelet[3151]: E0625 18:44:39.046576 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.046797 kubelet[3151]: W0625 18:44:39.046589 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.046797 kubelet[3151]: E0625 18:44:39.046604 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.046947 kubelet[3151]: E0625 18:44:39.046927 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.046947 kubelet[3151]: W0625 18:44:39.046942 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.047072 kubelet[3151]: E0625 18:44:39.046961 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.047246 kubelet[3151]: E0625 18:44:39.047220 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.047246 kubelet[3151]: W0625 18:44:39.047234 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.047433 kubelet[3151]: E0625 18:44:39.047250 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:39.082743 kubelet[3151]: E0625 18:44:39.082703 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.082923 kubelet[3151]: W0625 18:44:39.082745 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.082923 kubelet[3151]: E0625 18:44:39.082789 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.084571 kubelet[3151]: E0625 18:44:39.083392 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.084571 kubelet[3151]: W0625 18:44:39.083406 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.084571 kubelet[3151]: E0625 18:44:39.083438 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.084571 kubelet[3151]: E0625 18:44:39.083741 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.084571 kubelet[3151]: W0625 18:44:39.083753 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.084571 kubelet[3151]: E0625 18:44:39.083783 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.084571 kubelet[3151]: E0625 18:44:39.084085 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.084571 kubelet[3151]: W0625 18:44:39.084099 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.084571 kubelet[3151]: E0625 18:44:39.084128 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.084571 kubelet[3151]: E0625 18:44:39.084376 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.084967 kubelet[3151]: W0625 18:44:39.084388 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.084967 kubelet[3151]: E0625 18:44:39.084478 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:39.084967 kubelet[3151]: E0625 18:44:39.084630 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.084967 kubelet[3151]: W0625 18:44:39.084640 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.084967 kubelet[3151]: E0625 18:44:39.084726 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.084967 kubelet[3151]: E0625 18:44:39.084854 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.084967 kubelet[3151]: W0625 18:44:39.084863 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.084967 kubelet[3151]: E0625 18:44:39.084949 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.085309 kubelet[3151]: E0625 18:44:39.085090 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.085309 kubelet[3151]: W0625 18:44:39.085099 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.085309 kubelet[3151]: E0625 18:44:39.085130 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.085446 kubelet[3151]: E0625 18:44:39.085380 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.085446 kubelet[3151]: W0625 18:44:39.085391 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.085446 kubelet[3151]: E0625 18:44:39.085418 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.085843 kubelet[3151]: E0625 18:44:39.085707 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.085843 kubelet[3151]: W0625 18:44:39.085725 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.085843 kubelet[3151]: E0625 18:44:39.085743 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:39.088098 kubelet[3151]: E0625 18:44:39.085919 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.088098 kubelet[3151]: W0625 18:44:39.085930 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.088098 kubelet[3151]: E0625 18:44:39.085947 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.088098 kubelet[3151]: E0625 18:44:39.086253 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.088098 kubelet[3151]: W0625 18:44:39.086272 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.088098 kubelet[3151]: E0625 18:44:39.086297 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.088098 kubelet[3151]: E0625 18:44:39.086666 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.088098 kubelet[3151]: W0625 18:44:39.086679 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.088098 kubelet[3151]: E0625 18:44:39.087419 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.088098 kubelet[3151]: E0625 18:44:39.087783 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.089789 kubelet[3151]: W0625 18:44:39.087795 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.089789 kubelet[3151]: E0625 18:44:39.087878 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.089789 kubelet[3151]: E0625 18:44:39.089336 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.089789 kubelet[3151]: W0625 18:44:39.089346 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.089789 kubelet[3151]: E0625 18:44:39.089589 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:39.090448 kubelet[3151]: E0625 18:44:39.090288 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.090448 kubelet[3151]: W0625 18:44:39.090300 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.090945 kubelet[3151]: E0625 18:44:39.090639 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.091896 kubelet[3151]: E0625 18:44:39.091879 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.092300 kubelet[3151]: W0625 18:44:39.092271 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.092728 kubelet[3151]: E0625 18:44:39.092665 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:39.093230 kubelet[3151]: E0625 18:44:39.093127 3151 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:39.093230 kubelet[3151]: W0625 18:44:39.093155 3151 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:39.093230 kubelet[3151]: E0625 18:44:39.093174 3151 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:39.392380 containerd[1720]: time="2024-06-25T18:44:39.392306679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:39.394819 containerd[1720]: time="2024-06-25T18:44:39.394766145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 18:44:39.402155 containerd[1720]: time="2024-06-25T18:44:39.402077641Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:39.407446 containerd[1720]: time="2024-06-25T18:44:39.407391984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:39.408519 containerd[1720]: time="2024-06-25T18:44:39.407997100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.80131333s" Jun 25 18:44:39.408519 containerd[1720]: time="2024-06-25T18:44:39.408039602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 18:44:39.410453 containerd[1720]: time="2024-06-25T18:44:39.410118157Z" level=info msg="CreateContainer within sandbox \"644fd9d190646306f4ff5632defc79809ab7a11a8cb6d5423621bc8131698434\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 18:44:39.447870 containerd[1720]: time="2024-06-25T18:44:39.447819670Z" level=info msg="CreateContainer within sandbox \"644fd9d190646306f4ff5632defc79809ab7a11a8cb6d5423621bc8131698434\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2adcbe7a14d1817053b13566eb725028ddbfe4dff305d640f554f9fdd872dd52\"" Jun 25 18:44:39.449379 containerd[1720]: time="2024-06-25T18:44:39.448549889Z" level=info msg="StartContainer for \"2adcbe7a14d1817053b13566eb725028ddbfe4dff305d640f554f9fdd872dd52\"" Jun 25 18:44:39.498540 systemd[1]: Started cri-containerd-2adcbe7a14d1817053b13566eb725028ddbfe4dff305d640f554f9fdd872dd52.scope - libcontainer container 2adcbe7a14d1817053b13566eb725028ddbfe4dff305d640f554f9fdd872dd52. Jun 25 18:44:39.538137 containerd[1720]: time="2024-06-25T18:44:39.538085493Z" level=info msg="StartContainer for \"2adcbe7a14d1817053b13566eb725028ddbfe4dff305d640f554f9fdd872dd52\" returns successfully" Jun 25 18:44:39.548842 systemd[1]: cri-containerd-2adcbe7a14d1817053b13566eb725028ddbfe4dff305d640f554f9fdd872dd52.scope: Deactivated successfully. Jun 25 18:44:39.573031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2adcbe7a14d1817053b13566eb725028ddbfe4dff305d640f554f9fdd872dd52-rootfs.mount: Deactivated successfully. 
Jun 25 18:44:39.867069 kubelet[3151]: E0625 18:44:39.865759 3151 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dgqjn" podUID="21944943-4c4a-467a-8098-f49bb7649567" Jun 25 18:44:40.272104 kubelet[3151]: I0625 18:44:39.988914 3151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6ffb859b44-crfhn" podStartSLOduration=2.997013247 podCreationTimestamp="2024-06-25 18:44:34 +0000 UTC" firstStartedPulling="2024-06-25 18:44:34.613916118 +0000 UTC m=+21.308434695" lastFinishedPulling="2024-06-25 18:44:37.605769567 +0000 UTC m=+24.300288144" observedRunningTime="2024-06-25 18:44:37.985315745 +0000 UTC m=+24.679834322" watchObservedRunningTime="2024-06-25 18:44:39.988866696 +0000 UTC m=+26.683385273" Jun 25 18:44:40.888774 containerd[1720]: time="2024-06-25T18:44:40.888704055Z" level=info msg="shim disconnected" id=2adcbe7a14d1817053b13566eb725028ddbfe4dff305d640f554f9fdd872dd52 namespace=k8s.io Jun 25 18:44:40.888774 containerd[1720]: time="2024-06-25T18:44:40.888765657Z" level=warning msg="cleaning up after shim disconnected" id=2adcbe7a14d1817053b13566eb725028ddbfe4dff305d640f554f9fdd872dd52 namespace=k8s.io Jun 25 18:44:40.888774 containerd[1720]: time="2024-06-25T18:44:40.888777757Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:44:40.981874 containerd[1720]: time="2024-06-25T18:44:40.981823556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 18:44:41.866914 kubelet[3151]: E0625 18:44:41.866444 3151 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dgqjn" podUID="21944943-4c4a-467a-8098-f49bb7649567" Jun 25 18:44:41.940797 kubelet[3151]: I0625 18:44:41.940389 3151 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:44:43.868589 kubelet[3151]: E0625 18:44:43.865912 3151 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dgqjn" podUID="21944943-4c4a-467a-8098-f49bb7649567" Jun 25 18:44:45.867254 kubelet[3151]: E0625 18:44:45.865497 3151 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dgqjn" podUID="21944943-4c4a-467a-8098-f49bb7649567" Jun 25 18:44:47.573588 containerd[1720]: time="2024-06-25T18:44:47.573540173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:47.575637 containerd[1720]: time="2024-06-25T18:44:47.575572866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 18:44:47.577818 containerd[1720]: time="2024-06-25T18:44:47.577766514Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:47.582236 containerd[1720]: time="2024-06-25T18:44:47.582160413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:47.583337 containerd[1720]: time="2024-06-25T18:44:47.582808334Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 6.60086107s" Jun 25 18:44:47.583337 containerd[1720]: time="2024-06-25T18:44:47.582846147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 18:44:47.585260 containerd[1720]: time="2024-06-25T18:44:47.585053500Z" level=info msg="CreateContainer within sandbox \"644fd9d190646306f4ff5632defc79809ab7a11a8cb6d5423621bc8131698434\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 18:44:47.641688 containerd[1720]: time="2024-06-25T18:44:47.641643801Z" level=info msg="CreateContainer within sandbox \"644fd9d190646306f4ff5632defc79809ab7a11a8cb6d5423621bc8131698434\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b44f7c44ac3dcb581393ace18c21ef0e11f328324a3a335040b2f7cad8d725c8\"" Jun 25 18:44:47.642678 containerd[1720]: time="2024-06-25T18:44:47.642317731Z" level=info msg="StartContainer for \"b44f7c44ac3dcb581393ace18c21ef0e11f328324a3a335040b2f7cad8d725c8\"" Jun 25 18:44:47.678675 systemd[1]: Started cri-containerd-b44f7c44ac3dcb581393ace18c21ef0e11f328324a3a335040b2f7cad8d725c8.scope - libcontainer container b44f7c44ac3dcb581393ace18c21ef0e11f328324a3a335040b2f7cad8d725c8. Jun 25 18:44:47.709682 containerd[1720]: time="2024-06-25T18:44:47.709633691Z" level=info msg="StartContainer for \"b44f7c44ac3dcb581393ace18c21ef0e11f328324a3a335040b2f7cad8d725c8\" returns successfully" Jun 25 18:44:47.867319 kubelet[3151]: E0625 18:44:47.867180 3151 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dgqjn" podUID="21944943-4c4a-467a-8098-f49bb7649567" Jun 25 18:44:49.055433 containerd[1720]: time="2024-06-25T18:44:49.055389503Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:44:49.057244 systemd[1]: cri-containerd-b44f7c44ac3dcb581393ace18c21ef0e11f328324a3a335040b2f7cad8d725c8.scope: Deactivated successfully. Jun 25 18:44:49.079236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b44f7c44ac3dcb581393ace18c21ef0e11f328324a3a335040b2f7cad8d725c8-rootfs.mount: Deactivated successfully. 
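Note: the install-cni container pulled above runs long enough to write /etc/cni/net.d/calico-kubeconfig, but containerd's reload then fails with "no network config found in /etc/cni/net.d: cni plugin not initialized": the kubeconfig is not a network config, and no *.conf/*.conflist has been installed yet, which is also why the kubelet keeps reporting NetworkPluginNotReady above. A minimal sketch of the presence check involved, assuming Python 3 on the node and the /etc/cni/net.d path named in the error (the extension list mirrors what CNI config loaders conventionally accept, not containerd's exact code):

    from pathlib import Path

    CNI_DIR = Path("/etc/cni/net.d")

    def cni_configs():
        """List candidate CNI network configs (.conf, .conflist, .json) in /etc/cni/net.d."""
        if not CNI_DIR.is_dir():
            return []
        return sorted(p for p in CNI_DIR.iterdir()
                      if p.suffix in {".conf", ".conflist", ".json"})

    configs = cni_configs()
    if configs:
        for path in configs:
            print("CNI config present:", path)
    else:
        # The state containerd is complaining about in the reload error above.
        print("no network config found in", CNI_DIR)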
Jun 25 18:44:49.157123 kubelet[3151]: I0625 18:44:49.157043 3151 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 18:44:49.567536 kubelet[3151]: I0625 18:44:49.184104 3151 topology_manager.go:215] "Topology Admit Handler" podUID="2b332540-db59-490d-9479-ddef50c50dc9" podNamespace="kube-system" podName="coredns-5dd5756b68-lk7gj" Jun 25 18:44:49.567536 kubelet[3151]: I0625 18:44:49.190932 3151 topology_manager.go:215] "Topology Admit Handler" podUID="8d0c755f-a333-42d5-ab24-12d469b6f5b0" podNamespace="kube-system" podName="coredns-5dd5756b68-6vc7z" Jun 25 18:44:49.567536 kubelet[3151]: I0625 18:44:49.193272 3151 topology_manager.go:215] "Topology Admit Handler" podUID="178b8bc1-50ba-42a1-963c-23aa14c7c0d5" podNamespace="calico-system" podName="calico-kube-controllers-6c556964b9-mjpkm" Jun 25 18:44:49.567536 kubelet[3151]: I0625 18:44:49.251641 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d0c755f-a333-42d5-ab24-12d469b6f5b0-config-volume\") pod \"coredns-5dd5756b68-6vc7z\" (UID: \"8d0c755f-a333-42d5-ab24-12d469b6f5b0\") " pod="kube-system/coredns-5dd5756b68-6vc7z" Jun 25 18:44:49.567536 kubelet[3151]: I0625 18:44:49.251709 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpjqx\" (UniqueName: \"kubernetes.io/projected/178b8bc1-50ba-42a1-963c-23aa14c7c0d5-kube-api-access-hpjqx\") pod \"calico-kube-controllers-6c556964b9-mjpkm\" (UID: \"178b8bc1-50ba-42a1-963c-23aa14c7c0d5\") " pod="calico-system/calico-kube-controllers-6c556964b9-mjpkm" Jun 25 18:44:49.567536 kubelet[3151]: I0625 18:44:49.251760 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b332540-db59-490d-9479-ddef50c50dc9-config-volume\") pod \"coredns-5dd5756b68-lk7gj\" (UID: \"2b332540-db59-490d-9479-ddef50c50dc9\") " pod="kube-system/coredns-5dd5756b68-lk7gj" Jun 25 18:44:49.197552 systemd[1]: Created slice kubepods-burstable-pod2b332540_db59_490d_9479_ddef50c50dc9.slice - libcontainer container kubepods-burstable-pod2b332540_db59_490d_9479_ddef50c50dc9.slice. 
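Note: the kubelet entries in this stretch ("Fast updating node status...", the Topology Admit Handler lines for the two coredns pods and calico-kube-controllers, and the VerifyControllerAttachedVolume records) all carry the klog header: a severity letter, MMDD, wall-clock time, PID and source file:line, e.g. I0625 18:44:49.157043 3151 kubelet_node_status.go:493]. A small sketch, assuming Python 3, of splitting that header apart (purely illustrative parsing, not a kubelet API):

    import re

    # klog prefix: <severity><MMDD> <HH:MM:SS.micro> <pid> <file>:<line>]
    KLOG = re.compile(
        r'(?P<sev>[IWEF])(?P<date>\d{4})\s+'
        r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+'
        r'(?P<pid>\d+)\s+'
        r'(?P<source>[\w.]+:\d+)\]'
    )

    sample = 'I0625 18:44:49.157043 3151 kubelet_node_status.go:493] "Fast updating node status as it just became ready"'
    m = KLOG.search(sample)
    print(m.group("sev"), m.group("date"), m.group("time"), m.group("pid"), m.group("source"))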
Jun 25 18:44:49.568077 kubelet[3151]: I0625 18:44:49.251854 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwthb\" (UniqueName: \"kubernetes.io/projected/2b332540-db59-490d-9479-ddef50c50dc9-kube-api-access-mwthb\") pod \"coredns-5dd5756b68-lk7gj\" (UID: \"2b332540-db59-490d-9479-ddef50c50dc9\") " pod="kube-system/coredns-5dd5756b68-lk7gj" Jun 25 18:44:49.568077 kubelet[3151]: I0625 18:44:49.251908 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kztr4\" (UniqueName: \"kubernetes.io/projected/8d0c755f-a333-42d5-ab24-12d469b6f5b0-kube-api-access-kztr4\") pod \"coredns-5dd5756b68-6vc7z\" (UID: \"8d0c755f-a333-42d5-ab24-12d469b6f5b0\") " pod="kube-system/coredns-5dd5756b68-6vc7z" Jun 25 18:44:49.568077 kubelet[3151]: I0625 18:44:49.251945 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/178b8bc1-50ba-42a1-963c-23aa14c7c0d5-tigera-ca-bundle\") pod \"calico-kube-controllers-6c556964b9-mjpkm\" (UID: \"178b8bc1-50ba-42a1-963c-23aa14c7c0d5\") " pod="calico-system/calico-kube-controllers-6c556964b9-mjpkm" Jun 25 18:44:49.206402 systemd[1]: Created slice kubepods-burstable-pod8d0c755f_a333_42d5_ab24_12d469b6f5b0.slice - libcontainer container kubepods-burstable-pod8d0c755f_a333_42d5_ab24_12d469b6f5b0.slice. Jun 25 18:44:49.211384 systemd[1]: Created slice kubepods-besteffort-pod178b8bc1_50ba_42a1_963c_23aa14c7c0d5.slice - libcontainer container kubepods-besteffort-pod178b8bc1_50ba_42a1_963c_23aa14c7c0d5.slice. Jun 25 18:44:49.872404 containerd[1720]: time="2024-06-25T18:44:49.872277284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lk7gj,Uid:2b332540-db59-490d-9479-ddef50c50dc9,Namespace:kube-system,Attempt:0,}" Jun 25 18:44:49.872623 systemd[1]: Created slice kubepods-besteffort-pod21944943_4c4a_467a_8098_f49bb7649567.slice - libcontainer container kubepods-besteffort-pod21944943_4c4a_467a_8098_f49bb7649567.slice. 
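Note: the "Created slice" entries show the kubelet's cgroup layout for the newly admitted pods: each pod gets a systemd slice named after its QoS class, with the dashes in the pod UID rewritten as underscores, so burstable pod 2b332540-db59-490d-9479-ddef50c50dc9 becomes kubepods-burstable-pod2b332540_db59_490d_9479_ddef50c50dc9.slice and the besteffort pods get kubepods-besteffort-pod....slice. A minimal sketch, assuming Python 3, that reproduces the naming pattern visible in these entries (an illustration of the pattern, not the kubelet's own code):

    def pod_slice_name(pod_uid: str, qos_class: str) -> str:
        """Build the systemd slice name seen in the log for a pod UID and QoS class."""
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    # Matches the entries above.
    print(pod_slice_name("2b332540-db59-490d-9479-ddef50c50dc9", "burstable"))
    print(pod_slice_name("178b8bc1-50ba-42a1-963c-23aa14c7c0d5", "besteffort"))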
Jun 25 18:44:49.874040 containerd[1720]: time="2024-06-25T18:44:49.873876791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c556964b9-mjpkm,Uid:178b8bc1-50ba-42a1-963c-23aa14c7c0d5,Namespace:calico-system,Attempt:0,}" Jun 25 18:44:49.874238 containerd[1720]: time="2024-06-25T18:44:49.873895392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-6vc7z,Uid:8d0c755f-a333-42d5-ab24-12d469b6f5b0,Namespace:kube-system,Attempt:0,}" Jun 25 18:44:49.877225 containerd[1720]: time="2024-06-25T18:44:49.876979706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dgqjn,Uid:21944943-4c4a-467a-8098-f49bb7649567,Namespace:calico-system,Attempt:0,}" Jun 25 18:44:50.736087 containerd[1720]: time="2024-06-25T18:44:50.735996887Z" level=info msg="shim disconnected" id=b44f7c44ac3dcb581393ace18c21ef0e11f328324a3a335040b2f7cad8d725c8 namespace=k8s.io Jun 25 18:44:50.736561 containerd[1720]: time="2024-06-25T18:44:50.736141388Z" level=warning msg="cleaning up after shim disconnected" id=b44f7c44ac3dcb581393ace18c21ef0e11f328324a3a335040b2f7cad8d725c8 namespace=k8s.io Jun 25 18:44:50.736561 containerd[1720]: time="2024-06-25T18:44:50.736160888Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:44:50.989596 containerd[1720]: time="2024-06-25T18:44:50.989436792Z" level=error msg="Failed to destroy network for sandbox \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:50.989926 containerd[1720]: time="2024-06-25T18:44:50.989805293Z" level=error msg="encountered an error cleaning up failed sandbox \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:50.989926 containerd[1720]: time="2024-06-25T18:44:50.989877894Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lk7gj,Uid:2b332540-db59-490d-9479-ddef50c50dc9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:50.990587 kubelet[3151]: E0625 18:44:50.990495 3151 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:50.990587 kubelet[3151]: E0625 18:44:50.990571 3151 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-5dd5756b68-lk7gj" Jun 25 18:44:50.991477 kubelet[3151]: E0625 18:44:50.990599 3151 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-lk7gj" Jun 25 18:44:50.991477 kubelet[3151]: E0625 18:44:50.990676 3151 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-lk7gj_kube-system(2b332540-db59-490d-9479-ddef50c50dc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-lk7gj_kube-system(2b332540-db59-490d-9479-ddef50c50dc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-lk7gj" podUID="2b332540-db59-490d-9479-ddef50c50dc9" Jun 25 18:44:51.012468 containerd[1720]: time="2024-06-25T18:44:51.010700693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 18:44:51.013026 kubelet[3151]: I0625 18:44:51.012945 3151 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Jun 25 18:44:51.014219 containerd[1720]: time="2024-06-25T18:44:51.014177609Z" level=info msg="StopPodSandbox for \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\"" Jun 25 18:44:51.015232 containerd[1720]: time="2024-06-25T18:44:51.015201014Z" level=info msg="Ensure that sandbox d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd in task-service has been cleanup successfully" Jun 25 18:44:51.028572 containerd[1720]: time="2024-06-25T18:44:51.028527777Z" level=error msg="Failed to destroy network for sandbox \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:51.029233 containerd[1720]: time="2024-06-25T18:44:51.029098180Z" level=error msg="encountered an error cleaning up failed sandbox \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:51.029446 containerd[1720]: time="2024-06-25T18:44:51.029259181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-6vc7z,Uid:8d0c755f-a333-42d5-ab24-12d469b6f5b0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:51.029712 kubelet[3151]: E0625 
18:44:51.029688 3151 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:51.029795 kubelet[3151]: E0625 18:44:51.029747 3151 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-6vc7z" Jun 25 18:44:51.029795 kubelet[3151]: E0625 18:44:51.029775 3151 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-6vc7z" Jun 25 18:44:51.029882 kubelet[3151]: E0625 18:44:51.029837 3151 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-6vc7z_kube-system(8d0c755f-a333-42d5-ab24-12d469b6f5b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-6vc7z_kube-system(8d0c755f-a333-42d5-ab24-12d469b6f5b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-6vc7z" podUID="8d0c755f-a333-42d5-ab24-12d469b6f5b0" Jun 25 18:44:51.053120 containerd[1720]: time="2024-06-25T18:44:51.053062794Z" level=error msg="Failed to destroy network for sandbox \"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:51.053890 containerd[1720]: time="2024-06-25T18:44:51.053834498Z" level=error msg="encountered an error cleaning up failed sandbox \"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:51.053998 containerd[1720]: time="2024-06-25T18:44:51.053915998Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c556964b9-mjpkm,Uid:178b8bc1-50ba-42a1-963c-23aa14c7c0d5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jun 25 18:44:51.054202 kubelet[3151]: E0625 18:44:51.054143 3151 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:51.054295 kubelet[3151]: E0625 18:44:51.054205 3151 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c556964b9-mjpkm" Jun 25 18:44:51.054295 kubelet[3151]: E0625 18:44:51.054234 3151 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c556964b9-mjpkm" Jun 25 18:44:51.054581 kubelet[3151]: E0625 18:44:51.054294 3151 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c556964b9-mjpkm_calico-system(178b8bc1-50ba-42a1-963c-23aa14c7c0d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c556964b9-mjpkm_calico-system(178b8bc1-50ba-42a1-963c-23aa14c7c0d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c556964b9-mjpkm" podUID="178b8bc1-50ba-42a1-963c-23aa14c7c0d5" Jun 25 18:44:51.060249 containerd[1720]: time="2024-06-25T18:44:51.060150428Z" level=error msg="Failed to destroy network for sandbox \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:51.060771 containerd[1720]: time="2024-06-25T18:44:51.060704630Z" level=error msg="encountered an error cleaning up failed sandbox \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:51.060905 containerd[1720]: time="2024-06-25T18:44:51.060877431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dgqjn,Uid:21944943-4c4a-467a-8098-f49bb7649567,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:51.061440 kubelet[3151]: E0625 18:44:51.061267 3151 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:51.061440 kubelet[3151]: E0625 18:44:51.061325 3151 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dgqjn" Jun 25 18:44:51.061440 kubelet[3151]: E0625 18:44:51.061380 3151 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dgqjn" Jun 25 18:44:51.061617 kubelet[3151]: E0625 18:44:51.061445 3151 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dgqjn_calico-system(21944943-4c4a-467a-8098-f49bb7649567)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dgqjn_calico-system(21944943-4c4a-467a-8098-f49bb7649567)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dgqjn" podUID="21944943-4c4a-467a-8098-f49bb7649567" Jun 25 18:44:51.071250 containerd[1720]: time="2024-06-25T18:44:51.071206080Z" level=error msg="StopPodSandbox for \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\" failed" error="failed to destroy network for sandbox \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:51.071525 kubelet[3151]: E0625 18:44:51.071500 3151 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Jun 25 18:44:51.071631 kubelet[3151]: E0625 18:44:51.071580 3151 kuberuntime_manager.go:1380] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd"} Jun 25 18:44:51.071631 kubelet[3151]: E0625 18:44:51.071630 3151 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b332540-db59-490d-9479-ddef50c50dc9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:44:51.071748 kubelet[3151]: E0625 18:44:51.071668 3151 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b332540-db59-490d-9479-ddef50c50dc9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-lk7gj" podUID="2b332540-db59-490d-9479-ddef50c50dc9" Jun 25 18:44:51.829097 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e-shm.mount: Deactivated successfully. Jun 25 18:44:51.829228 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362-shm.mount: Deactivated successfully. Jun 25 18:44:51.829338 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3-shm.mount: Deactivated successfully. Jun 25 18:44:51.829501 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd-shm.mount: Deactivated successfully. 
Jun 25 18:44:52.016032 kubelet[3151]: I0625 18:44:52.015976 3151 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Jun 25 18:44:52.017682 containerd[1720]: time="2024-06-25T18:44:52.016906973Z" level=info msg="StopPodSandbox for \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\"" Jun 25 18:44:52.017682 containerd[1720]: time="2024-06-25T18:44:52.017192275Z" level=info msg="Ensure that sandbox 6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e in task-service has been cleanup successfully" Jun 25 18:44:52.021407 kubelet[3151]: I0625 18:44:52.020652 3151 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Jun 25 18:44:52.021563 containerd[1720]: time="2024-06-25T18:44:52.021446695Z" level=info msg="StopPodSandbox for \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\"" Jun 25 18:44:52.021842 containerd[1720]: time="2024-06-25T18:44:52.021674696Z" level=info msg="Ensure that sandbox fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362 in task-service has been cleanup successfully" Jun 25 18:44:52.022492 kubelet[3151]: I0625 18:44:52.022264 3151 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Jun 25 18:44:52.023135 containerd[1720]: time="2024-06-25T18:44:52.023107503Z" level=info msg="StopPodSandbox for \"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\"" Jun 25 18:44:52.023374 containerd[1720]: time="2024-06-25T18:44:52.023314704Z" level=info msg="Ensure that sandbox cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3 in task-service has been cleanup successfully" Jun 25 18:44:52.082208 containerd[1720]: time="2024-06-25T18:44:52.080510375Z" level=error msg="StopPodSandbox for \"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\" failed" error="failed to destroy network for sandbox \"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:52.082208 containerd[1720]: time="2024-06-25T18:44:52.081149078Z" level=error msg="StopPodSandbox for \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\" failed" error="failed to destroy network for sandbox \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:52.082208 containerd[1720]: time="2024-06-25T18:44:52.082189483Z" level=error msg="StopPodSandbox for \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\" failed" error="failed to destroy network for sandbox \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:44:52.082936 kubelet[3151]: E0625 18:44:52.080831 3151 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Jun 25 18:44:52.082936 kubelet[3151]: E0625 18:44:52.080878 3151 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3"} Jun 25 18:44:52.082936 kubelet[3151]: E0625 18:44:52.080930 3151 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"178b8bc1-50ba-42a1-963c-23aa14c7c0d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:44:52.082936 kubelet[3151]: E0625 18:44:52.080968 3151 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"178b8bc1-50ba-42a1-963c-23aa14c7c0d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c556964b9-mjpkm" podUID="178b8bc1-50ba-42a1-963c-23aa14c7c0d5" Jun 25 18:44:52.083118 kubelet[3151]: E0625 18:44:52.081388 3151 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Jun 25 18:44:52.083118 kubelet[3151]: E0625 18:44:52.081422 3151 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362"} Jun 25 18:44:52.083118 kubelet[3151]: E0625 18:44:52.081470 3151 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8d0c755f-a333-42d5-ab24-12d469b6f5b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:44:52.083118 kubelet[3151]: E0625 18:44:52.081526 3151 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8d0c755f-a333-42d5-ab24-12d469b6f5b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-6vc7z" podUID="8d0c755f-a333-42d5-ab24-12d469b6f5b0" Jun 25 18:44:52.083450 kubelet[3151]: E0625 18:44:52.082439 3151 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Jun 25 18:44:52.083450 kubelet[3151]: E0625 18:44:52.082469 3151 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e"} Jun 25 18:44:52.083450 kubelet[3151]: E0625 18:44:52.082518 3151 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21944943-4c4a-467a-8098-f49bb7649567\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:44:52.083450 kubelet[3151]: E0625 18:44:52.082552 3151 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21944943-4c4a-467a-8098-f49bb7649567\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dgqjn" podUID="21944943-4c4a-467a-8098-f49bb7649567" Jun 25 18:44:58.341301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3024549676.mount: Deactivated successfully. 
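Note: the tmpmount unit released here, var-lib-containerd-tmpmounts-containerd\x2dmount3024549676.mount, is named with systemd's unit encoding: "-" stands for the path separator and a literal "-" inside a component is escaped as \x2d (systemd's own systemd-escape tool performs the same translation). A small sketch, assuming Python 3.9+ for removesuffix, that recovers the mount path; the same rule decodes the run-netns-cni\x2d... units further down:

    import re

    def mount_unit_to_path(unit: str) -> str:
        """Decode a systemd .mount unit name back into its filesystem path."""
        name = unit.removesuffix(".mount")
        path = "/" + name.replace("-", "/")           # '-' separates path components
        return re.sub(r"\\x([0-9a-fA-F]{2})",         # \xNN escapes a literal byte, e.g. \x2d -> '-'
                      lambda m: chr(int(m.group(1), 16)), path)

    print(mount_unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount3024549676.mount"))
    # -> /var/lib/containerd/tmpmounts/containerd-mount3024549676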
Jun 25 18:44:58.386817 containerd[1720]: time="2024-06-25T18:44:58.386673522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:58.389591 containerd[1720]: time="2024-06-25T18:44:58.389521590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 18:44:58.393636 containerd[1720]: time="2024-06-25T18:44:58.393563087Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:58.398717 containerd[1720]: time="2024-06-25T18:44:58.398600008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:58.399739 containerd[1720]: time="2024-06-25T18:44:58.399299824Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 7.388496831s" Jun 25 18:44:58.399739 containerd[1720]: time="2024-06-25T18:44:58.399344625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 18:44:58.416851 containerd[1720]: time="2024-06-25T18:44:58.416666140Z" level=info msg="CreateContainer within sandbox \"644fd9d190646306f4ff5632defc79809ab7a11a8cb6d5423621bc8131698434\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 18:44:58.478657 containerd[1720]: time="2024-06-25T18:44:58.478558823Z" level=info msg="CreateContainer within sandbox \"644fd9d190646306f4ff5632defc79809ab7a11a8cb6d5423621bc8131698434\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"196e7f8a645a74003a801794f630ca1015493bc41fab89b163c32099ef5f806c\"" Jun 25 18:44:58.480072 containerd[1720]: time="2024-06-25T18:44:58.479243640Z" level=info msg="StartContainer for \"196e7f8a645a74003a801794f630ca1015493bc41fab89b163c32099ef5f806c\"" Jun 25 18:44:58.511521 systemd[1]: Started cri-containerd-196e7f8a645a74003a801794f630ca1015493bc41fab89b163c32099ef5f806c.scope - libcontainer container 196e7f8a645a74003a801794f630ca1015493bc41fab89b163c32099ef5f806c. Jun 25 18:44:58.544400 containerd[1720]: time="2024-06-25T18:44:58.543998791Z" level=info msg="StartContainer for \"196e7f8a645a74003a801794f630ca1015493bc41fab89b163c32099ef5f806c\" returns successfully" Jun 25 18:44:58.898954 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 18:44:58.899076 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
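Note: for the calico/node image, containerd reports 115238750 bytes read and a pull completing "in 7.388496831s", which also lines up with the PullImage request logged at 18:44:51.010 earlier. A quick back-of-the-envelope rate check, assuming Python 3 and taking those two figures from the lines above:

    bytes_read = 115_238_750      # "active requests=0, bytes read=115238750"
    pull_seconds = 7.388496831    # "... in 7.388496831s" from the Pulled image line

    print(f"~{bytes_read / pull_seconds / 1e6:.1f} MB/s effective pull rate")  # ~15.6 MB/s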
Jun 25 18:45:00.786651 systemd-networkd[1509]: vxlan.calico: Link UP Jun 25 18:45:00.786662 systemd-networkd[1509]: vxlan.calico: Gained carrier Jun 25 18:45:02.711539 systemd-networkd[1509]: vxlan.calico: Gained IPv6LL Jun 25 18:45:04.868369 containerd[1720]: time="2024-06-25T18:45:04.867079348Z" level=info msg="StopPodSandbox for \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\"" Jun 25 18:45:04.923262 kubelet[3151]: I0625 18:45:04.923213 3151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-8dvp8" podStartSLOduration=7.175445372 podCreationTimestamp="2024-06-25 18:44:34 +0000 UTC" firstStartedPulling="2024-06-25 18:44:34.651928066 +0000 UTC m=+21.346446643" lastFinishedPulling="2024-06-25 18:44:58.399643933 +0000 UTC m=+45.094162610" observedRunningTime="2024-06-25 18:44:59.068681661 +0000 UTC m=+45.763200338" watchObservedRunningTime="2024-06-25 18:45:04.923161339 +0000 UTC m=+51.617679916" Jun 25 18:45:04.969818 containerd[1720]: 2024-06-25 18:45:04.922 [INFO][4466] k8s.go 608: Cleaning up netns ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Jun 25 18:45:04.969818 containerd[1720]: 2024-06-25 18:45:04.924 [INFO][4466] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" iface="eth0" netns="/var/run/netns/cni-7bdaa492-bf2d-cd44-e3d2-5a9b4fbcb0e7" Jun 25 18:45:04.969818 containerd[1720]: 2024-06-25 18:45:04.925 [INFO][4466] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" iface="eth0" netns="/var/run/netns/cni-7bdaa492-bf2d-cd44-e3d2-5a9b4fbcb0e7" Jun 25 18:45:04.969818 containerd[1720]: 2024-06-25 18:45:04.925 [INFO][4466] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" iface="eth0" netns="/var/run/netns/cni-7bdaa492-bf2d-cd44-e3d2-5a9b4fbcb0e7" Jun 25 18:45:04.969818 containerd[1720]: 2024-06-25 18:45:04.925 [INFO][4466] k8s.go 615: Releasing IP address(es) ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Jun 25 18:45:04.969818 containerd[1720]: 2024-06-25 18:45:04.925 [INFO][4466] utils.go 188: Calico CNI releasing IP address ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Jun 25 18:45:04.969818 containerd[1720]: 2024-06-25 18:45:04.954 [INFO][4472] ipam_plugin.go 411: Releasing address using handleID ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" HandleID="k8s-pod-network.d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0" Jun 25 18:45:04.969818 containerd[1720]: 2024-06-25 18:45:04.954 [INFO][4472] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:04.969818 containerd[1720]: 2024-06-25 18:45:04.954 [INFO][4472] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:04.969818 containerd[1720]: 2024-06-25 18:45:04.962 [WARNING][4472] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" HandleID="k8s-pod-network.d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0" Jun 25 18:45:04.969818 containerd[1720]: 2024-06-25 18:45:04.962 [INFO][4472] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" HandleID="k8s-pod-network.d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0" Jun 25 18:45:04.969818 containerd[1720]: 2024-06-25 18:45:04.965 [INFO][4472] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:04.969818 containerd[1720]: 2024-06-25 18:45:04.966 [INFO][4466] k8s.go 621: Teardown processing complete. ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Jun 25 18:45:04.971401 containerd[1720]: time="2024-06-25T18:45:04.970586985Z" level=info msg="TearDown network for sandbox \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\" successfully" Jun 25 18:45:04.971401 containerd[1720]: time="2024-06-25T18:45:04.970629186Z" level=info msg="StopPodSandbox for \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\" returns successfully" Jun 25 18:45:04.973751 containerd[1720]: time="2024-06-25T18:45:04.972497339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lk7gj,Uid:2b332540-db59-490d-9479-ddef50c50dc9,Namespace:kube-system,Attempt:1,}" Jun 25 18:45:04.975315 systemd[1]: run-netns-cni\x2d7bdaa492\x2dbf2d\x2dcd44\x2de3d2\x2d5a9b4fbcb0e7.mount: Deactivated successfully. Jun 25 18:45:05.149565 systemd-networkd[1509]: calic3c4081628e: Link UP Jun 25 18:45:05.151220 systemd-networkd[1509]: calic3c4081628e: Gained carrier Jun 25 18:45:05.185402 containerd[1720]: 2024-06-25 18:45:05.065 [INFO][4479] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0 coredns-5dd5756b68- kube-system 2b332540-db59-490d-9479-ddef50c50dc9 721 0 2024-06-25 18:44:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012.0.0-a-7f29c71dfa coredns-5dd5756b68-lk7gj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic3c4081628e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895" Namespace="kube-system" Pod="coredns-5dd5756b68-lk7gj" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-" Jun 25 18:45:05.185402 containerd[1720]: 2024-06-25 18:45:05.065 [INFO][4479] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895" Namespace="kube-system" Pod="coredns-5dd5756b68-lk7gj" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0" Jun 25 18:45:05.185402 containerd[1720]: 2024-06-25 18:45:05.104 [INFO][4490] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895" HandleID="k8s-pod-network.1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895" 
Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0" Jun 25 18:45:05.185402 containerd[1720]: 2024-06-25 18:45:05.113 [INFO][4490] ipam_plugin.go 264: Auto assigning IP ContainerID="1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895" HandleID="k8s-pod-network.1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000593f40), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012.0.0-a-7f29c71dfa", "pod":"coredns-5dd5756b68-lk7gj", "timestamp":"2024-06-25 18:45:05.10433428 +0000 UTC"}, Hostname:"ci-4012.0.0-a-7f29c71dfa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:05.185402 containerd[1720]: 2024-06-25 18:45:05.113 [INFO][4490] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:05.185402 containerd[1720]: 2024-06-25 18:45:05.113 [INFO][4490] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:05.185402 containerd[1720]: 2024-06-25 18:45:05.113 [INFO][4490] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-7f29c71dfa' Jun 25 18:45:05.185402 containerd[1720]: 2024-06-25 18:45:05.115 [INFO][4490] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:05.185402 containerd[1720]: 2024-06-25 18:45:05.119 [INFO][4490] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:05.185402 containerd[1720]: 2024-06-25 18:45:05.124 [INFO][4490] ipam.go 489: Trying affinity for 192.168.103.192/26 host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:05.185402 containerd[1720]: 2024-06-25 18:45:05.126 [INFO][4490] ipam.go 155: Attempting to load block cidr=192.168.103.192/26 host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:05.185402 containerd[1720]: 2024-06-25 18:45:05.128 [INFO][4490] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:05.185402 containerd[1720]: 2024-06-25 18:45:05.128 [INFO][4490] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:05.185402 containerd[1720]: 2024-06-25 18:45:05.130 [INFO][4490] ipam.go 1685: Creating new handle: k8s-pod-network.1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895 Jun 25 18:45:05.185402 containerd[1720]: 2024-06-25 18:45:05.134 [INFO][4490] ipam.go 1203: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:05.185402 containerd[1720]: 2024-06-25 18:45:05.139 [INFO][4490] ipam.go 1216: Successfully claimed IPs: [192.168.103.193/26] block=192.168.103.192/26 handle="k8s-pod-network.1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:05.185402 containerd[1720]: 2024-06-25 18:45:05.140 [INFO][4490] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.103.193/26] 
handle="k8s-pod-network.1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:05.185402 containerd[1720]: 2024-06-25 18:45:05.140 [INFO][4490] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:05.185402 containerd[1720]: 2024-06-25 18:45:05.140 [INFO][4490] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.103.193/26] IPv6=[] ContainerID="1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895" HandleID="k8s-pod-network.1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0" Jun 25 18:45:05.187249 containerd[1720]: 2024-06-25 18:45:05.144 [INFO][4479] k8s.go 386: Populated endpoint ContainerID="1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895" Namespace="kube-system" Pod="coredns-5dd5756b68-lk7gj" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"2b332540-db59-490d-9479-ddef50c50dc9", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-7f29c71dfa", ContainerID:"", Pod:"coredns-5dd5756b68-lk7gj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3c4081628e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:05.187249 containerd[1720]: 2024-06-25 18:45:05.144 [INFO][4479] k8s.go 387: Calico CNI using IPs: [192.168.103.193/32] ContainerID="1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895" Namespace="kube-system" Pod="coredns-5dd5756b68-lk7gj" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0" Jun 25 18:45:05.187249 containerd[1720]: 2024-06-25 18:45:05.144 [INFO][4479] dataplane_linux.go 68: Setting the host side veth name to calic3c4081628e ContainerID="1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895" Namespace="kube-system" Pod="coredns-5dd5756b68-lk7gj" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0" Jun 25 18:45:05.187249 containerd[1720]: 2024-06-25 18:45:05.152 [INFO][4479] 
dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895" Namespace="kube-system" Pod="coredns-5dd5756b68-lk7gj" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0" Jun 25 18:45:05.187249 containerd[1720]: 2024-06-25 18:45:05.152 [INFO][4479] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895" Namespace="kube-system" Pod="coredns-5dd5756b68-lk7gj" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"2b332540-db59-490d-9479-ddef50c50dc9", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-7f29c71dfa", ContainerID:"1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895", Pod:"coredns-5dd5756b68-lk7gj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3c4081628e", MAC:"16:20:25:cc:18:2a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:05.187249 containerd[1720]: 2024-06-25 18:45:05.181 [INFO][4479] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895" Namespace="kube-system" Pod="coredns-5dd5756b68-lk7gj" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0" Jun 25 18:45:05.227970 containerd[1720]: time="2024-06-25T18:45:05.227862685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:05.229088 containerd[1720]: time="2024-06-25T18:45:05.228242296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:05.229088 containerd[1720]: time="2024-06-25T18:45:05.229039219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:05.229626 containerd[1720]: time="2024-06-25T18:45:05.229060919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:05.268570 systemd[1]: Started cri-containerd-1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895.scope - libcontainer container 1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895. Jun 25 18:45:05.323794 containerd[1720]: time="2024-06-25T18:45:05.323687705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lk7gj,Uid:2b332540-db59-490d-9479-ddef50c50dc9,Namespace:kube-system,Attempt:1,} returns sandbox id \"1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895\"" Jun 25 18:45:05.329153 containerd[1720]: time="2024-06-25T18:45:05.328944454Z" level=info msg="CreateContainer within sandbox \"1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:45:05.369062 containerd[1720]: time="2024-06-25T18:45:05.368923288Z" level=info msg="CreateContainer within sandbox \"1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"97d7585c0279a6ce7fe938b8535d0f3396f1d69e0a10fdedf87cd3b0d140459f\"" Jun 25 18:45:05.370816 containerd[1720]: time="2024-06-25T18:45:05.370088421Z" level=info msg="StartContainer for \"97d7585c0279a6ce7fe938b8535d0f3396f1d69e0a10fdedf87cd3b0d140459f\"" Jun 25 18:45:05.404514 systemd[1]: Started cri-containerd-97d7585c0279a6ce7fe938b8535d0f3396f1d69e0a10fdedf87cd3b0d140459f.scope - libcontainer container 97d7585c0279a6ce7fe938b8535d0f3396f1d69e0a10fdedf87cd3b0d140459f. Jun 25 18:45:05.443463 containerd[1720]: time="2024-06-25T18:45:05.442812985Z" level=info msg="StartContainer for \"97d7585c0279a6ce7fe938b8535d0f3396f1d69e0a10fdedf87cd3b0d140459f\" returns successfully" Jun 25 18:45:05.869619 containerd[1720]: time="2024-06-25T18:45:05.869510994Z" level=info msg="StopPodSandbox for \"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\"" Jun 25 18:45:05.871006 containerd[1720]: time="2024-06-25T18:45:05.869707399Z" level=info msg="StopPodSandbox for \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\"" Jun 25 18:45:06.027622 containerd[1720]: 2024-06-25 18:45:05.950 [INFO][4608] k8s.go 608: Cleaning up netns ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Jun 25 18:45:06.027622 containerd[1720]: 2024-06-25 18:45:05.950 [INFO][4608] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" iface="eth0" netns="/var/run/netns/cni-00861593-fcba-06de-3243-f019cc4a1aac" Jun 25 18:45:06.027622 containerd[1720]: 2024-06-25 18:45:05.952 [INFO][4608] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" iface="eth0" netns="/var/run/netns/cni-00861593-fcba-06de-3243-f019cc4a1aac" Jun 25 18:45:06.027622 containerd[1720]: 2024-06-25 18:45:05.952 [INFO][4608] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" iface="eth0" netns="/var/run/netns/cni-00861593-fcba-06de-3243-f019cc4a1aac" Jun 25 18:45:06.027622 containerd[1720]: 2024-06-25 18:45:05.952 [INFO][4608] k8s.go 615: Releasing IP address(es) ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Jun 25 18:45:06.027622 containerd[1720]: 2024-06-25 18:45:05.952 [INFO][4608] utils.go 188: Calico CNI releasing IP address ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Jun 25 18:45:06.027622 containerd[1720]: 2024-06-25 18:45:06.009 [INFO][4629] ipam_plugin.go 411: Releasing address using handleID ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" HandleID="k8s-pod-network.fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0" Jun 25 18:45:06.027622 containerd[1720]: 2024-06-25 18:45:06.009 [INFO][4629] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:06.027622 containerd[1720]: 2024-06-25 18:45:06.009 [INFO][4629] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:06.027622 containerd[1720]: 2024-06-25 18:45:06.021 [WARNING][4629] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" HandleID="k8s-pod-network.fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0" Jun 25 18:45:06.027622 containerd[1720]: 2024-06-25 18:45:06.021 [INFO][4629] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" HandleID="k8s-pod-network.fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0" Jun 25 18:45:06.027622 containerd[1720]: 2024-06-25 18:45:06.023 [INFO][4629] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:06.027622 containerd[1720]: 2024-06-25 18:45:06.024 [INFO][4608] k8s.go 621: Teardown processing complete. ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Jun 25 18:45:06.032045 containerd[1720]: time="2024-06-25T18:45:06.031446689Z" level=info msg="TearDown network for sandbox \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\" successfully" Jun 25 18:45:06.032045 containerd[1720]: time="2024-06-25T18:45:06.031526691Z" level=info msg="StopPodSandbox for \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\" returns successfully" Jun 25 18:45:06.032258 containerd[1720]: time="2024-06-25T18:45:06.032223311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-6vc7z,Uid:8d0c755f-a333-42d5-ab24-12d469b6f5b0,Namespace:kube-system,Attempt:1,}" Jun 25 18:45:06.037179 systemd[1]: run-netns-cni\x2d00861593\x2dfcba\x2d06de\x2d3243\x2df019cc4a1aac.mount: Deactivated successfully. Jun 25 18:45:06.063938 containerd[1720]: 2024-06-25 18:45:05.986 [INFO][4616] k8s.go 608: Cleaning up netns ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Jun 25 18:45:06.063938 containerd[1720]: 2024-06-25 18:45:05.987 [INFO][4616] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" iface="eth0" netns="/var/run/netns/cni-3c1c24a2-bfce-e2ea-a1ff-ecc27808ac41" Jun 25 18:45:06.063938 containerd[1720]: 2024-06-25 18:45:05.987 [INFO][4616] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" iface="eth0" netns="/var/run/netns/cni-3c1c24a2-bfce-e2ea-a1ff-ecc27808ac41" Jun 25 18:45:06.063938 containerd[1720]: 2024-06-25 18:45:05.988 [INFO][4616] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" iface="eth0" netns="/var/run/netns/cni-3c1c24a2-bfce-e2ea-a1ff-ecc27808ac41" Jun 25 18:45:06.063938 containerd[1720]: 2024-06-25 18:45:05.988 [INFO][4616] k8s.go 615: Releasing IP address(es) ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Jun 25 18:45:06.063938 containerd[1720]: 2024-06-25 18:45:05.988 [INFO][4616] utils.go 188: Calico CNI releasing IP address ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Jun 25 18:45:06.063938 containerd[1720]: 2024-06-25 18:45:06.050 [INFO][4636] ipam_plugin.go 411: Releasing address using handleID ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" HandleID="k8s-pod-network.cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0" Jun 25 18:45:06.063938 containerd[1720]: 2024-06-25 18:45:06.051 [INFO][4636] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:06.063938 containerd[1720]: 2024-06-25 18:45:06.051 [INFO][4636] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:06.063938 containerd[1720]: 2024-06-25 18:45:06.057 [WARNING][4636] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" HandleID="k8s-pod-network.cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0" Jun 25 18:45:06.063938 containerd[1720]: 2024-06-25 18:45:06.057 [INFO][4636] ipam_plugin.go 439: Releasing address using workloadID ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" HandleID="k8s-pod-network.cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0" Jun 25 18:45:06.063938 containerd[1720]: 2024-06-25 18:45:06.059 [INFO][4636] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:06.063938 containerd[1720]: 2024-06-25 18:45:06.060 [INFO][4616] k8s.go 621: Teardown processing complete. 
ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Jun 25 18:45:06.067954 containerd[1720]: time="2024-06-25T18:45:06.064177418Z" level=info msg="TearDown network for sandbox \"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\" successfully" Jun 25 18:45:06.067954 containerd[1720]: time="2024-06-25T18:45:06.064206518Z" level=info msg="StopPodSandbox for \"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\" returns successfully" Jun 25 18:45:06.072548 containerd[1720]: time="2024-06-25T18:45:06.070910909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c556964b9-mjpkm,Uid:178b8bc1-50ba-42a1-963c-23aa14c7c0d5,Namespace:calico-system,Attempt:1,}" Jun 25 18:45:06.071294 systemd[1]: run-netns-cni\x2d3c1c24a2\x2dbfce\x2de2ea\x2da1ff\x2decc27808ac41.mount: Deactivated successfully. Jun 25 18:45:06.114281 kubelet[3151]: I0625 18:45:06.114234 3151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lk7gj" podStartSLOduration=39.114187237 podCreationTimestamp="2024-06-25 18:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:45:06.082619041 +0000 UTC m=+52.777137718" watchObservedRunningTime="2024-06-25 18:45:06.114187237 +0000 UTC m=+52.808705814" Jun 25 18:45:06.300426 systemd-networkd[1509]: cali2a7b402751f: Link UP Jun 25 18:45:06.302096 systemd-networkd[1509]: cali2a7b402751f: Gained carrier Jun 25 18:45:06.322397 containerd[1720]: 2024-06-25 18:45:06.176 [INFO][4645] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0 coredns-5dd5756b68- kube-system 8d0c755f-a333-42d5-ab24-12d469b6f5b0 732 0 2024-06-25 18:44:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012.0.0-a-7f29c71dfa coredns-5dd5756b68-6vc7z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2a7b402751f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6" Namespace="kube-system" Pod="coredns-5dd5756b68-6vc7z" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-" Jun 25 18:45:06.322397 containerd[1720]: 2024-06-25 18:45:06.177 [INFO][4645] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6" Namespace="kube-system" Pod="coredns-5dd5756b68-6vc7z" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0" Jun 25 18:45:06.322397 containerd[1720]: 2024-06-25 18:45:06.227 [INFO][4671] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6" HandleID="k8s-pod-network.ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0" Jun 25 18:45:06.322397 containerd[1720]: 2024-06-25 18:45:06.237 [INFO][4671] ipam_plugin.go 264: Auto assigning IP ContainerID="ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6" HandleID="k8s-pod-network.ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6" 
Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ec300), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012.0.0-a-7f29c71dfa", "pod":"coredns-5dd5756b68-6vc7z", "timestamp":"2024-06-25 18:45:06.22708104 +0000 UTC"}, Hostname:"ci-4012.0.0-a-7f29c71dfa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:06.322397 containerd[1720]: 2024-06-25 18:45:06.237 [INFO][4671] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:06.322397 containerd[1720]: 2024-06-25 18:45:06.237 [INFO][4671] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:06.322397 containerd[1720]: 2024-06-25 18:45:06.238 [INFO][4671] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-7f29c71dfa' Jun 25 18:45:06.322397 containerd[1720]: 2024-06-25 18:45:06.240 [INFO][4671] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:06.322397 containerd[1720]: 2024-06-25 18:45:06.252 [INFO][4671] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:06.322397 containerd[1720]: 2024-06-25 18:45:06.264 [INFO][4671] ipam.go 489: Trying affinity for 192.168.103.192/26 host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:06.322397 containerd[1720]: 2024-06-25 18:45:06.267 [INFO][4671] ipam.go 155: Attempting to load block cidr=192.168.103.192/26 host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:06.322397 containerd[1720]: 2024-06-25 18:45:06.270 [INFO][4671] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:06.322397 containerd[1720]: 2024-06-25 18:45:06.270 [INFO][4671] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:06.322397 containerd[1720]: 2024-06-25 18:45:06.272 [INFO][4671] ipam.go 1685: Creating new handle: k8s-pod-network.ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6 Jun 25 18:45:06.322397 containerd[1720]: 2024-06-25 18:45:06.277 [INFO][4671] ipam.go 1203: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:06.322397 containerd[1720]: 2024-06-25 18:45:06.287 [INFO][4671] ipam.go 1216: Successfully claimed IPs: [192.168.103.194/26] block=192.168.103.192/26 handle="k8s-pod-network.ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:06.322397 containerd[1720]: 2024-06-25 18:45:06.287 [INFO][4671] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.103.194/26] handle="k8s-pod-network.ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:06.322397 containerd[1720]: 2024-06-25 18:45:06.287 [INFO][4671] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:45:06.322397 containerd[1720]: 2024-06-25 18:45:06.287 [INFO][4671] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.103.194/26] IPv6=[] ContainerID="ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6" HandleID="k8s-pod-network.ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0" Jun 25 18:45:06.323906 containerd[1720]: 2024-06-25 18:45:06.291 [INFO][4645] k8s.go 386: Populated endpoint ContainerID="ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6" Namespace="kube-system" Pod="coredns-5dd5756b68-6vc7z" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"8d0c755f-a333-42d5-ab24-12d469b6f5b0", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-7f29c71dfa", ContainerID:"", Pod:"coredns-5dd5756b68-6vc7z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a7b402751f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:06.323906 containerd[1720]: 2024-06-25 18:45:06.293 [INFO][4645] k8s.go 387: Calico CNI using IPs: [192.168.103.194/32] ContainerID="ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6" Namespace="kube-system" Pod="coredns-5dd5756b68-6vc7z" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0" Jun 25 18:45:06.323906 containerd[1720]: 2024-06-25 18:45:06.293 [INFO][4645] dataplane_linux.go 68: Setting the host side veth name to cali2a7b402751f ContainerID="ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6" Namespace="kube-system" Pod="coredns-5dd5756b68-6vc7z" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0" Jun 25 18:45:06.323906 containerd[1720]: 2024-06-25 18:45:06.301 [INFO][4645] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6" Namespace="kube-system" Pod="coredns-5dd5756b68-6vc7z" 
WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0" Jun 25 18:45:06.323906 containerd[1720]: 2024-06-25 18:45:06.301 [INFO][4645] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6" Namespace="kube-system" Pod="coredns-5dd5756b68-6vc7z" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"8d0c755f-a333-42d5-ab24-12d469b6f5b0", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-7f29c71dfa", ContainerID:"ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6", Pod:"coredns-5dd5756b68-6vc7z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a7b402751f", MAC:"82:85:dc:37:3c:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:06.323906 containerd[1720]: 2024-06-25 18:45:06.317 [INFO][4645] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6" Namespace="kube-system" Pod="coredns-5dd5756b68-6vc7z" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0" Jun 25 18:45:06.357070 systemd-networkd[1509]: cali80c2ce637eb: Link UP Jun 25 18:45:06.357328 systemd-networkd[1509]: cali80c2ce637eb: Gained carrier Jun 25 18:45:06.374617 containerd[1720]: time="2024-06-25T18:45:06.374526024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:06.376024 containerd[1720]: time="2024-06-25T18:45:06.375842362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:06.376133 containerd[1720]: time="2024-06-25T18:45:06.376054268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:06.376181 containerd[1720]: time="2024-06-25T18:45:06.376105069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:06.389286 containerd[1720]: 2024-06-25 18:45:06.212 [INFO][4656] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0 calico-kube-controllers-6c556964b9- calico-system 178b8bc1-50ba-42a1-963c-23aa14c7c0d5 733 0 2024-06-25 18:44:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c556964b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4012.0.0-a-7f29c71dfa calico-kube-controllers-6c556964b9-mjpkm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali80c2ce637eb [] []}} ContainerID="4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b" Namespace="calico-system" Pod="calico-kube-controllers-6c556964b9-mjpkm" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-" Jun 25 18:45:06.389286 containerd[1720]: 2024-06-25 18:45:06.213 [INFO][4656] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b" Namespace="calico-system" Pod="calico-kube-controllers-6c556964b9-mjpkm" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0" Jun 25 18:45:06.389286 containerd[1720]: 2024-06-25 18:45:06.258 [INFO][4678] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b" HandleID="k8s-pod-network.4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0" Jun 25 18:45:06.389286 containerd[1720]: 2024-06-25 18:45:06.271 [INFO][4678] ipam_plugin.go 264: Auto assigning IP ContainerID="4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b" HandleID="k8s-pod-network.4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031ad70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012.0.0-a-7f29c71dfa", "pod":"calico-kube-controllers-6c556964b9-mjpkm", "timestamp":"2024-06-25 18:45:06.258968245 +0000 UTC"}, Hostname:"ci-4012.0.0-a-7f29c71dfa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:06.389286 containerd[1720]: 2024-06-25 18:45:06.271 [INFO][4678] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:06.389286 containerd[1720]: 2024-06-25 18:45:06.287 [INFO][4678] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:45:06.389286 containerd[1720]: 2024-06-25 18:45:06.287 [INFO][4678] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-7f29c71dfa' Jun 25 18:45:06.389286 containerd[1720]: 2024-06-25 18:45:06.290 [INFO][4678] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:06.389286 containerd[1720]: 2024-06-25 18:45:06.303 [INFO][4678] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:06.389286 containerd[1720]: 2024-06-25 18:45:06.321 [INFO][4678] ipam.go 489: Trying affinity for 192.168.103.192/26 host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:06.389286 containerd[1720]: 2024-06-25 18:45:06.323 [INFO][4678] ipam.go 155: Attempting to load block cidr=192.168.103.192/26 host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:06.389286 containerd[1720]: 2024-06-25 18:45:06.330 [INFO][4678] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:06.389286 containerd[1720]: 2024-06-25 18:45:06.330 [INFO][4678] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:06.389286 containerd[1720]: 2024-06-25 18:45:06.332 [INFO][4678] ipam.go 1685: Creating new handle: k8s-pod-network.4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b Jun 25 18:45:06.389286 containerd[1720]: 2024-06-25 18:45:06.338 [INFO][4678] ipam.go 1203: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:06.389286 containerd[1720]: 2024-06-25 18:45:06.346 [INFO][4678] ipam.go 1216: Successfully claimed IPs: [192.168.103.195/26] block=192.168.103.192/26 handle="k8s-pod-network.4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:06.389286 containerd[1720]: 2024-06-25 18:45:06.347 [INFO][4678] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.103.195/26] handle="k8s-pod-network.4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:06.389286 containerd[1720]: 2024-06-25 18:45:06.347 [INFO][4678] ipam_plugin.go 373: Released host-wide IPAM lock. 
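Each assignment above is bracketed by "About to acquire host-wide IPAM lock", "Acquired host-wide IPAM lock" and "Released host-wide IPAM lock", so concurrent CNI ADDs on the node hand out addresses one at a time. The sketch below only illustrates that ordering with an in-process sync.Mutex; Calico's real lock is shared across separate CNI plugin invocations on the host, and the pod-to-address mapping printed here depends on goroutine scheduling rather than matching the log exactly.

    package main

    import (
        "fmt"
        "sync"
    )

    // allocator hands out the next free host-bit index under a lock, mirroring
    // the acquire/assign/release sequence the log records for each pod.
    type allocator struct {
        mu   sync.Mutex
        next int
    }

    func (a *allocator) assignNext(pod string) {
        a.mu.Lock()         // "Acquired host-wide IPAM lock."
        defer a.mu.Unlock() // "Released host-wide IPAM lock."
        a.next++
        // Which pod gets which index is scheduling-dependent in this sketch.
        fmt.Printf("assigned 192.168.103.%d to %s\n", 192+a.next, pod)
    }

    func main() {
        a := &allocator{}
        var wg sync.WaitGroup
        pods := []string{
            "coredns-5dd5756b68-lk7gj",
            "coredns-5dd5756b68-6vc7z",
            "calico-kube-controllers-6c556964b9-mjpkm",
        }
        for _, pod := range pods {
            wg.Add(1)
            go func(p string) {
                defer wg.Done()
                a.assignNext(p)
            }(pod)
        }
        wg.Wait()
    }
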
Jun 25 18:45:06.389286 containerd[1720]: 2024-06-25 18:45:06.347 [INFO][4678] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.103.195/26] IPv6=[] ContainerID="4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b" HandleID="k8s-pod-network.4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0" Jun 25 18:45:06.392070 containerd[1720]: 2024-06-25 18:45:06.350 [INFO][4656] k8s.go 386: Populated endpoint ContainerID="4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b" Namespace="calico-system" Pod="calico-kube-controllers-6c556964b9-mjpkm" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0", GenerateName:"calico-kube-controllers-6c556964b9-", Namespace:"calico-system", SelfLink:"", UID:"178b8bc1-50ba-42a1-963c-23aa14c7c0d5", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c556964b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-7f29c71dfa", ContainerID:"", Pod:"calico-kube-controllers-6c556964b9-mjpkm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80c2ce637eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:06.392070 containerd[1720]: 2024-06-25 18:45:06.350 [INFO][4656] k8s.go 387: Calico CNI using IPs: [192.168.103.195/32] ContainerID="4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b" Namespace="calico-system" Pod="calico-kube-controllers-6c556964b9-mjpkm" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0" Jun 25 18:45:06.392070 containerd[1720]: 2024-06-25 18:45:06.350 [INFO][4656] dataplane_linux.go 68: Setting the host side veth name to cali80c2ce637eb ContainerID="4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b" Namespace="calico-system" Pod="calico-kube-controllers-6c556964b9-mjpkm" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0" Jun 25 18:45:06.392070 containerd[1720]: 2024-06-25 18:45:06.356 [INFO][4656] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b" Namespace="calico-system" Pod="calico-kube-controllers-6c556964b9-mjpkm" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0" Jun 25 18:45:06.392070 containerd[1720]: 2024-06-25 18:45:06.357 
[INFO][4656] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b" Namespace="calico-system" Pod="calico-kube-controllers-6c556964b9-mjpkm" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0", GenerateName:"calico-kube-controllers-6c556964b9-", Namespace:"calico-system", SelfLink:"", UID:"178b8bc1-50ba-42a1-963c-23aa14c7c0d5", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c556964b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-7f29c71dfa", ContainerID:"4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b", Pod:"calico-kube-controllers-6c556964b9-mjpkm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80c2ce637eb", MAC:"12:fe:0d:7f:c7:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:06.392070 containerd[1720]: 2024-06-25 18:45:06.386 [INFO][4656] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b" Namespace="calico-system" Pod="calico-kube-controllers-6c556964b9-mjpkm" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0" Jun 25 18:45:06.418960 systemd[1]: Started cri-containerd-ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6.scope - libcontainer container ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6. Jun 25 18:45:06.459400 containerd[1720]: time="2024-06-25T18:45:06.458580610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:06.459400 containerd[1720]: time="2024-06-25T18:45:06.458650312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:06.459400 containerd[1720]: time="2024-06-25T18:45:06.458696713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:06.459400 containerd[1720]: time="2024-06-25T18:45:06.458717714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:06.509833 systemd[1]: Started cri-containerd-4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b.scope - libcontainer container 4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b. Jun 25 18:45:06.522824 containerd[1720]: time="2024-06-25T18:45:06.522263917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-6vc7z,Uid:8d0c755f-a333-42d5-ab24-12d469b6f5b0,Namespace:kube-system,Attempt:1,} returns sandbox id \"ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6\"" Jun 25 18:45:06.527452 containerd[1720]: time="2024-06-25T18:45:06.527405963Z" level=info msg="CreateContainer within sandbox \"ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:45:06.567183 containerd[1720]: time="2024-06-25T18:45:06.566902883Z" level=info msg="CreateContainer within sandbox \"ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0e30ed5ee24d035d6e17d4419e151bbe7d5f670d8f028adc293a4b73b66cd6c0\"" Jun 25 18:45:06.572377 containerd[1720]: time="2024-06-25T18:45:06.570601888Z" level=info msg="StartContainer for \"0e30ed5ee24d035d6e17d4419e151bbe7d5f670d8f028adc293a4b73b66cd6c0\"" Jun 25 18:45:06.582340 containerd[1720]: time="2024-06-25T18:45:06.582302920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c556964b9-mjpkm,Uid:178b8bc1-50ba-42a1-963c-23aa14c7c0d5,Namespace:calico-system,Attempt:1,} returns sandbox id \"4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b\"" Jun 25 18:45:06.584824 containerd[1720]: time="2024-06-25T18:45:06.584793291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 18:45:06.608966 systemd[1]: Started cri-containerd-0e30ed5ee24d035d6e17d4419e151bbe7d5f670d8f028adc293a4b73b66cd6c0.scope - libcontainer container 0e30ed5ee24d035d6e17d4419e151bbe7d5f670d8f028adc293a4b73b66cd6c0. 
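The "Started cri-containerd-<id>.scope - libcontainer container <id>" units above correspond to containers that the kubelet created through containerd's CRI. A hedged sketch using the containerd Go client to list those container IDs on the node; the socket path and the "k8s.io" namespace are the usual CRI defaults and are assumed here rather than taken from the log, and the containerd client module must be available to build it.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Default containerd socket; kubelet-managed containers live in the
        // "k8s.io" namespace (both typical defaults, assumed here).
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        containers, err := client.Containers(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range containers {
            // These IDs are the long hex IDs seen in the cri-containerd-*.scope units.
            fmt.Println(c.ID())
        }
    }
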
Jun 25 18:45:06.637833 containerd[1720]: time="2024-06-25T18:45:06.637788295Z" level=info msg="StartContainer for \"0e30ed5ee24d035d6e17d4419e151bbe7d5f670d8f028adc293a4b73b66cd6c0\" returns successfully" Jun 25 18:45:07.082814 kubelet[3151]: I0625 18:45:07.082606 3151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-6vc7z" podStartSLOduration=40.081339682 podCreationTimestamp="2024-06-25 18:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:45:07.08058686 +0000 UTC m=+53.775105437" watchObservedRunningTime="2024-06-25 18:45:07.081339682 +0000 UTC m=+53.775858359" Jun 25 18:45:07.127538 systemd-networkd[1509]: calic3c4081628e: Gained IPv6LL Jun 25 18:45:07.511532 systemd-networkd[1509]: cali80c2ce637eb: Gained IPv6LL Jun 25 18:45:07.639617 systemd-networkd[1509]: cali2a7b402751f: Gained IPv6LL Jun 25 18:45:07.870255 containerd[1720]: time="2024-06-25T18:45:07.869316742Z" level=info msg="StopPodSandbox for \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\"" Jun 25 18:45:07.972599 containerd[1720]: 2024-06-25 18:45:07.930 [INFO][4843] k8s.go 608: Cleaning up netns ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Jun 25 18:45:07.972599 containerd[1720]: 2024-06-25 18:45:07.930 [INFO][4843] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" iface="eth0" netns="/var/run/netns/cni-d0eb6874-c342-8e42-34c7-8f39eb0ca5ea" Jun 25 18:45:07.972599 containerd[1720]: 2024-06-25 18:45:07.931 [INFO][4843] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" iface="eth0" netns="/var/run/netns/cni-d0eb6874-c342-8e42-34c7-8f39eb0ca5ea" Jun 25 18:45:07.972599 containerd[1720]: 2024-06-25 18:45:07.932 [INFO][4843] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" iface="eth0" netns="/var/run/netns/cni-d0eb6874-c342-8e42-34c7-8f39eb0ca5ea" Jun 25 18:45:07.972599 containerd[1720]: 2024-06-25 18:45:07.932 [INFO][4843] k8s.go 615: Releasing IP address(es) ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Jun 25 18:45:07.972599 containerd[1720]: 2024-06-25 18:45:07.932 [INFO][4843] utils.go 188: Calico CNI releasing IP address ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Jun 25 18:45:07.972599 containerd[1720]: 2024-06-25 18:45:07.956 [INFO][4849] ipam_plugin.go 411: Releasing address using handleID ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" HandleID="k8s-pod-network.6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0" Jun 25 18:45:07.972599 containerd[1720]: 2024-06-25 18:45:07.956 [INFO][4849] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:07.972599 containerd[1720]: 2024-06-25 18:45:07.956 [INFO][4849] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:07.972599 containerd[1720]: 2024-06-25 18:45:07.962 [WARNING][4849] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" HandleID="k8s-pod-network.6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0" Jun 25 18:45:07.972599 containerd[1720]: 2024-06-25 18:45:07.962 [INFO][4849] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" HandleID="k8s-pod-network.6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0" Jun 25 18:45:07.972599 containerd[1720]: 2024-06-25 18:45:07.965 [INFO][4849] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:07.972599 containerd[1720]: 2024-06-25 18:45:07.969 [INFO][4843] k8s.go 621: Teardown processing complete. ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Jun 25 18:45:07.975055 containerd[1720]: time="2024-06-25T18:45:07.974082315Z" level=info msg="TearDown network for sandbox \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\" successfully" Jun 25 18:45:07.975055 containerd[1720]: time="2024-06-25T18:45:07.974124116Z" level=info msg="StopPodSandbox for \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\" returns successfully" Jun 25 18:45:07.979857 systemd[1]: run-netns-cni\x2dd0eb6874\x2dc342\x2d8e42\x2d34c7\x2d8f39eb0ca5ea.mount: Deactivated successfully. Jun 25 18:45:07.983995 containerd[1720]: time="2024-06-25T18:45:07.983570184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dgqjn,Uid:21944943-4c4a-467a-8098-f49bb7649567,Namespace:calico-system,Attempt:1,}" Jun 25 18:45:08.171597 systemd-networkd[1509]: cali55310d10d11: Link UP Jun 25 18:45:08.174286 systemd-networkd[1509]: cali55310d10d11: Gained carrier Jun 25 18:45:08.193773 containerd[1720]: 2024-06-25 18:45:08.098 [INFO][4859] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0 csi-node-driver- calico-system 21944943-4c4a-467a-8098-f49bb7649567 763 0 2024-06-25 18:44:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4012.0.0-a-7f29c71dfa csi-node-driver-dgqjn eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali55310d10d11 [] []}} ContainerID="4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef" Namespace="calico-system" Pod="csi-node-driver-dgqjn" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-" Jun 25 18:45:08.193773 containerd[1720]: 2024-06-25 18:45:08.098 [INFO][4859] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef" Namespace="calico-system" Pod="csi-node-driver-dgqjn" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0" Jun 25 18:45:08.193773 containerd[1720]: 2024-06-25 18:45:08.132 [INFO][4866] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef" HandleID="k8s-pod-network.4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef" 
Workload="ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0" Jun 25 18:45:08.193773 containerd[1720]: 2024-06-25 18:45:08.140 [INFO][4866] ipam_plugin.go 264: Auto assigning IP ContainerID="4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef" HandleID="k8s-pod-network.4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00058fe30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012.0.0-a-7f29c71dfa", "pod":"csi-node-driver-dgqjn", "timestamp":"2024-06-25 18:45:08.13282592 +0000 UTC"}, Hostname:"ci-4012.0.0-a-7f29c71dfa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:08.193773 containerd[1720]: 2024-06-25 18:45:08.140 [INFO][4866] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:08.193773 containerd[1720]: 2024-06-25 18:45:08.140 [INFO][4866] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:08.193773 containerd[1720]: 2024-06-25 18:45:08.140 [INFO][4866] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-7f29c71dfa' Jun 25 18:45:08.193773 containerd[1720]: 2024-06-25 18:45:08.141 [INFO][4866] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:08.193773 containerd[1720]: 2024-06-25 18:45:08.145 [INFO][4866] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:08.193773 containerd[1720]: 2024-06-25 18:45:08.148 [INFO][4866] ipam.go 489: Trying affinity for 192.168.103.192/26 host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:08.193773 containerd[1720]: 2024-06-25 18:45:08.150 [INFO][4866] ipam.go 155: Attempting to load block cidr=192.168.103.192/26 host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:08.193773 containerd[1720]: 2024-06-25 18:45:08.152 [INFO][4866] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:08.193773 containerd[1720]: 2024-06-25 18:45:08.152 [INFO][4866] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:08.193773 containerd[1720]: 2024-06-25 18:45:08.154 [INFO][4866] ipam.go 1685: Creating new handle: k8s-pod-network.4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef Jun 25 18:45:08.193773 containerd[1720]: 2024-06-25 18:45:08.158 [INFO][4866] ipam.go 1203: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:08.193773 containerd[1720]: 2024-06-25 18:45:08.162 [INFO][4866] ipam.go 1216: Successfully claimed IPs: [192.168.103.196/26] block=192.168.103.192/26 handle="k8s-pod-network.4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:08.193773 containerd[1720]: 2024-06-25 18:45:08.162 [INFO][4866] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.103.196/26] 
handle="k8s-pod-network.4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:08.193773 containerd[1720]: 2024-06-25 18:45:08.162 [INFO][4866] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:08.193773 containerd[1720]: 2024-06-25 18:45:08.162 [INFO][4866] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.103.196/26] IPv6=[] ContainerID="4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef" HandleID="k8s-pod-network.4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0" Jun 25 18:45:08.195736 containerd[1720]: 2024-06-25 18:45:08.165 [INFO][4859] k8s.go 386: Populated endpoint ContainerID="4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef" Namespace="calico-system" Pod="csi-node-driver-dgqjn" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"21944943-4c4a-467a-8098-f49bb7649567", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-7f29c71dfa", ContainerID:"", Pod:"csi-node-driver-dgqjn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali55310d10d11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:08.195736 containerd[1720]: 2024-06-25 18:45:08.165 [INFO][4859] k8s.go 387: Calico CNI using IPs: [192.168.103.196/32] ContainerID="4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef" Namespace="calico-system" Pod="csi-node-driver-dgqjn" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0" Jun 25 18:45:08.195736 containerd[1720]: 2024-06-25 18:45:08.165 [INFO][4859] dataplane_linux.go 68: Setting the host side veth name to cali55310d10d11 ContainerID="4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef" Namespace="calico-system" Pod="csi-node-driver-dgqjn" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0" Jun 25 18:45:08.195736 containerd[1720]: 2024-06-25 18:45:08.175 [INFO][4859] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef" Namespace="calico-system" Pod="csi-node-driver-dgqjn" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0" Jun 25 18:45:08.195736 containerd[1720]: 2024-06-25 
18:45:08.176 [INFO][4859] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef" Namespace="calico-system" Pod="csi-node-driver-dgqjn" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"21944943-4c4a-467a-8098-f49bb7649567", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-7f29c71dfa", ContainerID:"4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef", Pod:"csi-node-driver-dgqjn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali55310d10d11", MAC:"82:5e:67:ba:71:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:08.195736 containerd[1720]: 2024-06-25 18:45:08.187 [INFO][4859] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef" Namespace="calico-system" Pod="csi-node-driver-dgqjn" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0" Jun 25 18:45:08.224864 containerd[1720]: time="2024-06-25T18:45:08.224683026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:08.224864 containerd[1720]: time="2024-06-25T18:45:08.224798430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:08.225664 containerd[1720]: time="2024-06-25T18:45:08.225463148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:08.225664 containerd[1720]: time="2024-06-25T18:45:08.225515550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:08.253540 systemd[1]: Started cri-containerd-4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef.scope - libcontainer container 4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef. 
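Each CNI ADD above ends with a host-side veth named cali plus a hash (calic3c4081628e, cali2a7b402751f, cali80c2ce637eb, cali55310d10d11) recorded in the WorkloadEndpoint. A small standard-library sketch that, if run on this node, would list the cali* interfaces with their hardware addresses and flags; it is only meant to show where the interface names in these entries end up, and the MACs it prints are the host-side ones, which need not equal the MAC stored in the endpoint.

    package main

    import (
        "fmt"
        "log"
        "net"
        "strings"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            log.Fatal(err)
        }
        for _, ifc := range ifaces {
            // Calico names host-side veths "cali" + a hash of the workload,
            // e.g. cali55310d10d11 for csi-node-driver-dgqjn in this log.
            if !strings.HasPrefix(ifc.Name, "cali") {
                continue
            }
            fmt.Printf("%-16s mac=%s flags=%s\n", ifc.Name, ifc.HardwareAddr, ifc.Flags)
        }
    }
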
Jun 25 18:45:08.291729 containerd[1720]: time="2024-06-25T18:45:08.290940307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dgqjn,Uid:21944943-4c4a-467a-8098-f49bb7649567,Namespace:calico-system,Attempt:1,} returns sandbox id \"4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef\"" Jun 25 18:45:10.087996 containerd[1720]: time="2024-06-25T18:45:10.087950500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:10.091367 containerd[1720]: time="2024-06-25T18:45:10.091262994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 18:45:10.096851 containerd[1720]: time="2024-06-25T18:45:10.096780451Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:10.101547 containerd[1720]: time="2024-06-25T18:45:10.101494885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:10.102383 containerd[1720]: time="2024-06-25T18:45:10.102198305Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.517243209s" Jun 25 18:45:10.102383 containerd[1720]: time="2024-06-25T18:45:10.102241106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 18:45:10.103223 containerd[1720]: time="2024-06-25T18:45:10.103194433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 18:45:10.126198 containerd[1720]: time="2024-06-25T18:45:10.125263159Z" level=info msg="CreateContainer within sandbox \"4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 18:45:10.172360 containerd[1720]: time="2024-06-25T18:45:10.172311094Z" level=info msg="CreateContainer within sandbox \"4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6e6c7c61022c1f9bc46e535a60ab67eb2d8444835af736a102f081b8642b843a\"" Jun 25 18:45:10.173481 containerd[1720]: time="2024-06-25T18:45:10.172836409Z" level=info msg="StartContainer for \"6e6c7c61022c1f9bc46e535a60ab67eb2d8444835af736a102f081b8642b843a\"" Jun 25 18:45:10.199475 systemd-networkd[1509]: cali55310d10d11: Gained IPv6LL Jun 25 18:45:10.210534 systemd[1]: Started cri-containerd-6e6c7c61022c1f9bc46e535a60ab67eb2d8444835af736a102f081b8642b843a.scope - libcontainer container 6e6c7c61022c1f9bc46e535a60ab67eb2d8444835af736a102f081b8642b843a. 
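The kube-controllers image pull above is reported as taking "3.517243209s". Subtracting the two containerd timestamps in this log (PullImage at 18:45:06.584793291Z, Pulled at 18:45:10.102198305Z) gives roughly the same figure; the small difference is just the time between the measured span and when the log lines were emitted. A quick standard-library check using those copied timestamps:

    package main

    import (
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Timestamps copied from the PullImage and Pulled entries for
        // ghcr.io/flatcar/calico/kube-controllers:v3.28.0 in this log.
        start, err := time.Parse(time.RFC3339Nano, "2024-06-25T18:45:06.584793291Z")
        if err != nil {
            log.Fatal(err)
        }
        done, err := time.Parse(time.RFC3339Nano, "2024-06-25T18:45:10.102198305Z")
        if err != nil {
            log.Fatal(err)
        }
        // Prints ~3.5174s, close to the 3.517243209s containerd itself reports.
        fmt.Println("pull took", done.Sub(start))
    }
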
Jun 25 18:45:10.254101 containerd[1720]: time="2024-06-25T18:45:10.253745205Z" level=info msg="StartContainer for \"6e6c7c61022c1f9bc46e535a60ab67eb2d8444835af736a102f081b8642b843a\" returns successfully" Jun 25 18:45:11.104533 kubelet[3151]: I0625 18:45:11.104443 3151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c556964b9-mjpkm" podStartSLOduration=33.585531113 podCreationTimestamp="2024-06-25 18:44:34 +0000 UTC" firstStartedPulling="2024-06-25 18:45:06.584271176 +0000 UTC m=+53.278789853" lastFinishedPulling="2024-06-25 18:45:10.102972727 +0000 UTC m=+56.797491304" observedRunningTime="2024-06-25 18:45:11.104220164 +0000 UTC m=+57.798738841" watchObservedRunningTime="2024-06-25 18:45:11.104232564 +0000 UTC m=+57.798751441" Jun 25 18:45:12.061853 containerd[1720]: time="2024-06-25T18:45:12.061798727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:12.063760 containerd[1720]: time="2024-06-25T18:45:12.063692617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 18:45:12.079789 containerd[1720]: time="2024-06-25T18:45:12.079728731Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:12.086985 containerd[1720]: time="2024-06-25T18:45:12.086913092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:12.088049 containerd[1720]: time="2024-06-25T18:45:12.087851487Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.984615653s" Jun 25 18:45:12.088049 containerd[1720]: time="2024-06-25T18:45:12.087899887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 18:45:12.091086 containerd[1720]: time="2024-06-25T18:45:12.091049870Z" level=info msg="CreateContainer within sandbox \"4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 18:45:12.129966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1664196407.mount: Deactivated successfully. Jun 25 18:45:12.144438 containerd[1720]: time="2024-06-25T18:45:12.144329184Z" level=info msg="CreateContainer within sandbox \"4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1d2fbc0fafcd3c7a02a5deab43cb4eb0ee6cbd7e7ec6458045ad7e435a756d6c\"" Jun 25 18:45:12.145073 containerd[1720]: time="2024-06-25T18:45:12.144918381Z" level=info msg="StartContainer for \"1d2fbc0fafcd3c7a02a5deab43cb4eb0ee6cbd7e7ec6458045ad7e435a756d6c\"" Jun 25 18:45:12.180516 systemd[1]: Started cri-containerd-1d2fbc0fafcd3c7a02a5deab43cb4eb0ee6cbd7e7ec6458045ad7e435a756d6c.scope - libcontainer container 1d2fbc0fafcd3c7a02a5deab43cb4eb0ee6cbd7e7ec6458045ad7e435a756d6c. 
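The kubelet startup-latency entry above gives firstStartedPulling and lastFinishedPulling timestamps for calico-kube-controllers, and containerd separately reports the corresponding pull taking 3.517243209s. A minimal Go sketch (not part of this log; parseKubeletTime is a hypothetical helper) that parses those two timestamps with the standard library and prints the interval between them, which lands within a couple of milliseconds of the figure containerd reports:

package main

import (
	"fmt"
	"strings"
	"time"
)

// parseKubeletTime parses a timestamp in the form kubelet logs above
// ("2024-06-25 18:45:06.584271176 +0000 UTC m=+53.278789853"), dropping
// the monotonic-clock suffix before handing it to time.Parse.
func parseKubeletTime(s string) (time.Time, error) {
	if i := strings.Index(s, " m="); i >= 0 {
		s = s[:i]
	}
	return time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
}

func main() {
	first, err := parseKubeletTime("2024-06-25 18:45:06.584271176 +0000 UTC m=+53.278789853")
	if err != nil {
		panic(err)
	}
	last, err := parseKubeletTime("2024-06-25 18:45:10.102972727 +0000 UTC m=+56.797491304")
	if err != nil {
		panic(err)
	}
	// Interval between first pull start and last pull finish; roughly matches
	// the "in 3.517243209s" containerd reports for the kube-controllers image.
	fmt.Println("pull window:", last.Sub(first))
}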
Jun 25 18:45:12.210457 containerd[1720]: time="2024-06-25T18:45:12.210402430Z" level=info msg="StartContainer for \"1d2fbc0fafcd3c7a02a5deab43cb4eb0ee6cbd7e7ec6458045ad7e435a756d6c\" returns successfully" Jun 25 18:45:12.212303 containerd[1720]: time="2024-06-25T18:45:12.211833222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 18:45:13.893300 containerd[1720]: time="2024-06-25T18:45:13.893253898Z" level=info msg="StopPodSandbox for \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\"" Jun 25 18:45:13.956406 containerd[1720]: 2024-06-25 18:45:13.926 [WARNING][5051] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"8d0c755f-a333-42d5-ab24-12d469b6f5b0", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-7f29c71dfa", ContainerID:"ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6", Pod:"coredns-5dd5756b68-6vc7z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a7b402751f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:13.956406 containerd[1720]: 2024-06-25 18:45:13.927 [INFO][5051] k8s.go 608: Cleaning up netns ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Jun 25 18:45:13.956406 containerd[1720]: 2024-06-25 18:45:13.927 [INFO][5051] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" iface="eth0" netns="" Jun 25 18:45:13.956406 containerd[1720]: 2024-06-25 18:45:13.927 [INFO][5051] k8s.go 615: Releasing IP address(es) ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Jun 25 18:45:13.956406 containerd[1720]: 2024-06-25 18:45:13.927 [INFO][5051] utils.go 188: Calico CNI releasing IP address ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Jun 25 18:45:13.956406 containerd[1720]: 2024-06-25 18:45:13.946 [INFO][5058] ipam_plugin.go 411: Releasing address using handleID ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" HandleID="k8s-pod-network.fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0" Jun 25 18:45:13.956406 containerd[1720]: 2024-06-25 18:45:13.946 [INFO][5058] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:13.956406 containerd[1720]: 2024-06-25 18:45:13.946 [INFO][5058] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:13.956406 containerd[1720]: 2024-06-25 18:45:13.952 [WARNING][5058] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" HandleID="k8s-pod-network.fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0" Jun 25 18:45:13.956406 containerd[1720]: 2024-06-25 18:45:13.952 [INFO][5058] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" HandleID="k8s-pod-network.fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0" Jun 25 18:45:13.956406 containerd[1720]: 2024-06-25 18:45:13.954 [INFO][5058] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:13.956406 containerd[1720]: 2024-06-25 18:45:13.955 [INFO][5051] k8s.go 621: Teardown processing complete. ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Jun 25 18:45:13.957117 containerd[1720]: time="2024-06-25T18:45:13.956439726Z" level=info msg="TearDown network for sandbox \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\" successfully" Jun 25 18:45:13.957117 containerd[1720]: time="2024-06-25T18:45:13.956469727Z" level=info msg="StopPodSandbox for \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\" returns successfully" Jun 25 18:45:13.957219 containerd[1720]: time="2024-06-25T18:45:13.957146143Z" level=info msg="RemovePodSandbox for \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\"" Jun 25 18:45:13.957219 containerd[1720]: time="2024-06-25T18:45:13.957182744Z" level=info msg="Forcibly stopping sandbox \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\"" Jun 25 18:45:14.037852 containerd[1720]: 2024-06-25 18:45:14.008 [WARNING][5076] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"8d0c755f-a333-42d5-ab24-12d469b6f5b0", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-7f29c71dfa", ContainerID:"ff746b81dc00b3dae44ffc66b9cdca0c763a6b273cc4a01884fa8f333e9294f6", Pod:"coredns-5dd5756b68-6vc7z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a7b402751f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:14.037852 containerd[1720]: 2024-06-25 18:45:14.008 [INFO][5076] k8s.go 608: Cleaning up netns ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Jun 25 18:45:14.037852 containerd[1720]: 2024-06-25 18:45:14.008 [INFO][5076] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" iface="eth0" netns="" Jun 25 18:45:14.037852 containerd[1720]: 2024-06-25 18:45:14.008 [INFO][5076] k8s.go 615: Releasing IP address(es) ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Jun 25 18:45:14.037852 containerd[1720]: 2024-06-25 18:45:14.008 [INFO][5076] utils.go 188: Calico CNI releasing IP address ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Jun 25 18:45:14.037852 containerd[1720]: 2024-06-25 18:45:14.028 [INFO][5083] ipam_plugin.go 411: Releasing address using handleID ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" HandleID="k8s-pod-network.fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0" Jun 25 18:45:14.037852 containerd[1720]: 2024-06-25 18:45:14.028 [INFO][5083] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:14.037852 containerd[1720]: 2024-06-25 18:45:14.028 [INFO][5083] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:14.037852 containerd[1720]: 2024-06-25 18:45:14.034 [WARNING][5083] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" HandleID="k8s-pod-network.fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0" Jun 25 18:45:14.037852 containerd[1720]: 2024-06-25 18:45:14.034 [INFO][5083] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" HandleID="k8s-pod-network.fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--6vc7z-eth0" Jun 25 18:45:14.037852 containerd[1720]: 2024-06-25 18:45:14.035 [INFO][5083] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:14.037852 containerd[1720]: 2024-06-25 18:45:14.036 [INFO][5076] k8s.go 621: Teardown processing complete. ContainerID="fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362" Jun 25 18:45:14.038655 containerd[1720]: time="2024-06-25T18:45:14.037893997Z" level=info msg="TearDown network for sandbox \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\" successfully" Jun 25 18:45:14.048094 containerd[1720]: time="2024-06-25T18:45:14.048048242Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:45:14.048216 containerd[1720]: time="2024-06-25T18:45:14.048125444Z" level=info msg="RemovePodSandbox \"fec4f0c6346dc1bf3ead1856dcf64bf8c2ea6952021ff8eb2a1b5cece00f2362\" returns successfully" Jun 25 18:45:14.048800 containerd[1720]: time="2024-06-25T18:45:14.048762760Z" level=info msg="StopPodSandbox for \"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\"" Jun 25 18:45:14.116382 containerd[1720]: 2024-06-25 18:45:14.081 [WARNING][5101] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0", GenerateName:"calico-kube-controllers-6c556964b9-", Namespace:"calico-system", SelfLink:"", UID:"178b8bc1-50ba-42a1-963c-23aa14c7c0d5", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c556964b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-7f29c71dfa", ContainerID:"4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b", Pod:"calico-kube-controllers-6c556964b9-mjpkm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80c2ce637eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:14.116382 containerd[1720]: 2024-06-25 18:45:14.081 [INFO][5101] k8s.go 608: Cleaning up netns ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Jun 25 18:45:14.116382 containerd[1720]: 2024-06-25 18:45:14.081 [INFO][5101] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" iface="eth0" netns="" Jun 25 18:45:14.116382 containerd[1720]: 2024-06-25 18:45:14.081 [INFO][5101] k8s.go 615: Releasing IP address(es) ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Jun 25 18:45:14.116382 containerd[1720]: 2024-06-25 18:45:14.081 [INFO][5101] utils.go 188: Calico CNI releasing IP address ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Jun 25 18:45:14.116382 containerd[1720]: 2024-06-25 18:45:14.102 [INFO][5107] ipam_plugin.go 411: Releasing address using handleID ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" HandleID="k8s-pod-network.cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0" Jun 25 18:45:14.116382 containerd[1720]: 2024-06-25 18:45:14.102 [INFO][5107] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:14.116382 containerd[1720]: 2024-06-25 18:45:14.103 [INFO][5107] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:14.116382 containerd[1720]: 2024-06-25 18:45:14.109 [WARNING][5107] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" HandleID="k8s-pod-network.cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0" Jun 25 18:45:14.116382 containerd[1720]: 2024-06-25 18:45:14.109 [INFO][5107] ipam_plugin.go 439: Releasing address using workloadID ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" HandleID="k8s-pod-network.cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0" Jun 25 18:45:14.116382 containerd[1720]: 2024-06-25 18:45:14.111 [INFO][5107] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:14.116382 containerd[1720]: 2024-06-25 18:45:14.113 [INFO][5101] k8s.go 621: Teardown processing complete. ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Jun 25 18:45:14.116382 containerd[1720]: time="2024-06-25T18:45:14.116173590Z" level=info msg="TearDown network for sandbox \"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\" successfully" Jun 25 18:45:14.116382 containerd[1720]: time="2024-06-25T18:45:14.116248492Z" level=info msg="StopPodSandbox for \"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\" returns successfully" Jun 25 18:45:14.117111 containerd[1720]: time="2024-06-25T18:45:14.116586100Z" level=info msg="RemovePodSandbox for \"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\"" Jun 25 18:45:14.117111 containerd[1720]: time="2024-06-25T18:45:14.116616101Z" level=info msg="Forcibly stopping sandbox \"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\"" Jun 25 18:45:14.183400 containerd[1720]: 2024-06-25 18:45:14.155 [WARNING][5128] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0", GenerateName:"calico-kube-controllers-6c556964b9-", Namespace:"calico-system", SelfLink:"", UID:"178b8bc1-50ba-42a1-963c-23aa14c7c0d5", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c556964b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-7f29c71dfa", ContainerID:"4728e362ae4027d42ed1f0829bf6fe4ba96dcbf0aea5eadccda54cd5a59f2d9b", Pod:"calico-kube-controllers-6c556964b9-mjpkm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80c2ce637eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:14.183400 containerd[1720]: 2024-06-25 18:45:14.156 [INFO][5128] k8s.go 608: Cleaning up netns ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Jun 25 18:45:14.183400 containerd[1720]: 2024-06-25 18:45:14.156 [INFO][5128] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" iface="eth0" netns="" Jun 25 18:45:14.183400 containerd[1720]: 2024-06-25 18:45:14.156 [INFO][5128] k8s.go 615: Releasing IP address(es) ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Jun 25 18:45:14.183400 containerd[1720]: 2024-06-25 18:45:14.156 [INFO][5128] utils.go 188: Calico CNI releasing IP address ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Jun 25 18:45:14.183400 containerd[1720]: 2024-06-25 18:45:14.175 [INFO][5134] ipam_plugin.go 411: Releasing address using handleID ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" HandleID="k8s-pod-network.cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0" Jun 25 18:45:14.183400 containerd[1720]: 2024-06-25 18:45:14.175 [INFO][5134] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:14.183400 containerd[1720]: 2024-06-25 18:45:14.175 [INFO][5134] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:14.183400 containerd[1720]: 2024-06-25 18:45:14.180 [WARNING][5134] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" HandleID="k8s-pod-network.cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0" Jun 25 18:45:14.183400 containerd[1720]: 2024-06-25 18:45:14.180 [INFO][5134] ipam_plugin.go 439: Releasing address using workloadID ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" HandleID="k8s-pod-network.cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-calico--kube--controllers--6c556964b9--mjpkm-eth0" Jun 25 18:45:14.183400 containerd[1720]: 2024-06-25 18:45:14.181 [INFO][5134] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:14.183400 containerd[1720]: 2024-06-25 18:45:14.182 [INFO][5128] k8s.go 621: Teardown processing complete. ContainerID="cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3" Jun 25 18:45:14.183400 containerd[1720]: time="2024-06-25T18:45:14.183319514Z" level=info msg="TearDown network for sandbox \"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\" successfully" Jun 25 18:45:14.192584 containerd[1720]: time="2024-06-25T18:45:14.192545638Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:45:14.192717 containerd[1720]: time="2024-06-25T18:45:14.192615939Z" level=info msg="RemovePodSandbox \"cf1983e924f45d0a44104a8a28c31eaeb4373bf053538a0d3ba5988a70427da3\" returns successfully" Jun 25 18:45:14.193229 containerd[1720]: time="2024-06-25T18:45:14.193199453Z" level=info msg="StopPodSandbox for \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\"" Jun 25 18:45:14.257979 containerd[1720]: 2024-06-25 18:45:14.227 [WARNING][5152] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"21944943-4c4a-467a-8098-f49bb7649567", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-7f29c71dfa", ContainerID:"4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef", Pod:"csi-node-driver-dgqjn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali55310d10d11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:14.257979 containerd[1720]: 2024-06-25 18:45:14.227 [INFO][5152] k8s.go 608: Cleaning up netns ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Jun 25 18:45:14.257979 containerd[1720]: 2024-06-25 18:45:14.227 [INFO][5152] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" iface="eth0" netns="" Jun 25 18:45:14.257979 containerd[1720]: 2024-06-25 18:45:14.227 [INFO][5152] k8s.go 615: Releasing IP address(es) ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Jun 25 18:45:14.257979 containerd[1720]: 2024-06-25 18:45:14.227 [INFO][5152] utils.go 188: Calico CNI releasing IP address ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Jun 25 18:45:14.257979 containerd[1720]: 2024-06-25 18:45:14.248 [INFO][5159] ipam_plugin.go 411: Releasing address using handleID ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" HandleID="k8s-pod-network.6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0" Jun 25 18:45:14.257979 containerd[1720]: 2024-06-25 18:45:14.249 [INFO][5159] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:14.257979 containerd[1720]: 2024-06-25 18:45:14.249 [INFO][5159] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:14.257979 containerd[1720]: 2024-06-25 18:45:14.254 [WARNING][5159] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" HandleID="k8s-pod-network.6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0" Jun 25 18:45:14.257979 containerd[1720]: 2024-06-25 18:45:14.254 [INFO][5159] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" HandleID="k8s-pod-network.6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0" Jun 25 18:45:14.257979 containerd[1720]: 2024-06-25 18:45:14.256 [INFO][5159] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:14.257979 containerd[1720]: 2024-06-25 18:45:14.257 [INFO][5152] k8s.go 621: Teardown processing complete. ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Jun 25 18:45:14.258749 containerd[1720]: time="2024-06-25T18:45:14.257985021Z" level=info msg="TearDown network for sandbox \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\" successfully" Jun 25 18:45:14.258749 containerd[1720]: time="2024-06-25T18:45:14.258015821Z" level=info msg="StopPodSandbox for \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\" returns successfully" Jun 25 18:45:14.258749 containerd[1720]: time="2024-06-25T18:45:14.258556934Z" level=info msg="RemovePodSandbox for \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\"" Jun 25 18:45:14.258749 containerd[1720]: time="2024-06-25T18:45:14.258590835Z" level=info msg="Forcibly stopping sandbox \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\"" Jun 25 18:45:14.326629 containerd[1720]: 2024-06-25 18:45:14.299 [WARNING][5177] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"21944943-4c4a-467a-8098-f49bb7649567", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-7f29c71dfa", ContainerID:"4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef", Pod:"csi-node-driver-dgqjn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali55310d10d11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:14.326629 containerd[1720]: 2024-06-25 18:45:14.299 [INFO][5177] k8s.go 608: Cleaning up netns ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Jun 25 18:45:14.326629 containerd[1720]: 2024-06-25 18:45:14.299 [INFO][5177] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" iface="eth0" netns="" Jun 25 18:45:14.326629 containerd[1720]: 2024-06-25 18:45:14.299 [INFO][5177] k8s.go 615: Releasing IP address(es) ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Jun 25 18:45:14.326629 containerd[1720]: 2024-06-25 18:45:14.299 [INFO][5177] utils.go 188: Calico CNI releasing IP address ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Jun 25 18:45:14.326629 containerd[1720]: 2024-06-25 18:45:14.318 [INFO][5183] ipam_plugin.go 411: Releasing address using handleID ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" HandleID="k8s-pod-network.6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0" Jun 25 18:45:14.326629 containerd[1720]: 2024-06-25 18:45:14.318 [INFO][5183] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:14.326629 containerd[1720]: 2024-06-25 18:45:14.318 [INFO][5183] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:14.326629 containerd[1720]: 2024-06-25 18:45:14.323 [WARNING][5183] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" HandleID="k8s-pod-network.6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0" Jun 25 18:45:14.326629 containerd[1720]: 2024-06-25 18:45:14.323 [INFO][5183] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" HandleID="k8s-pod-network.6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-csi--node--driver--dgqjn-eth0" Jun 25 18:45:14.326629 containerd[1720]: 2024-06-25 18:45:14.324 [INFO][5183] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:14.326629 containerd[1720]: 2024-06-25 18:45:14.325 [INFO][5177] k8s.go 621: Teardown processing complete. ContainerID="6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e" Jun 25 18:45:14.327275 containerd[1720]: time="2024-06-25T18:45:14.326645382Z" level=info msg="TearDown network for sandbox \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\" successfully" Jun 25 18:45:14.332469 containerd[1720]: time="2024-06-25T18:45:14.332428121Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:45:14.332591 containerd[1720]: time="2024-06-25T18:45:14.332487123Z" level=info msg="RemovePodSandbox \"6dfe0c735648e56c4ef82c088a65a2983374d23d7ad58abd6e3ab488b6c4310e\" returns successfully" Jun 25 18:45:14.333122 containerd[1720]: time="2024-06-25T18:45:14.333084737Z" level=info msg="StopPodSandbox for \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\"" Jun 25 18:45:14.399572 containerd[1720]: 2024-06-25 18:45:14.370 [WARNING][5202] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"2b332540-db59-490d-9479-ddef50c50dc9", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-7f29c71dfa", ContainerID:"1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895", Pod:"coredns-5dd5756b68-lk7gj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3c4081628e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:14.399572 containerd[1720]: 2024-06-25 18:45:14.370 [INFO][5202] k8s.go 608: Cleaning up netns ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Jun 25 18:45:14.399572 containerd[1720]: 2024-06-25 18:45:14.370 [INFO][5202] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" iface="eth0" netns="" Jun 25 18:45:14.399572 containerd[1720]: 2024-06-25 18:45:14.370 [INFO][5202] k8s.go 615: Releasing IP address(es) ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Jun 25 18:45:14.399572 containerd[1720]: 2024-06-25 18:45:14.370 [INFO][5202] utils.go 188: Calico CNI releasing IP address ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Jun 25 18:45:14.399572 containerd[1720]: 2024-06-25 18:45:14.389 [INFO][5208] ipam_plugin.go 411: Releasing address using handleID ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" HandleID="k8s-pod-network.d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0" Jun 25 18:45:14.399572 containerd[1720]: 2024-06-25 18:45:14.391 [INFO][5208] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:14.399572 containerd[1720]: 2024-06-25 18:45:14.391 [INFO][5208] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:14.399572 containerd[1720]: 2024-06-25 18:45:14.396 [WARNING][5208] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" HandleID="k8s-pod-network.d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0" Jun 25 18:45:14.399572 containerd[1720]: 2024-06-25 18:45:14.396 [INFO][5208] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" HandleID="k8s-pod-network.d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0" Jun 25 18:45:14.399572 containerd[1720]: 2024-06-25 18:45:14.397 [INFO][5208] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:14.399572 containerd[1720]: 2024-06-25 18:45:14.398 [INFO][5202] k8s.go 621: Teardown processing complete. ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Jun 25 18:45:14.400401 containerd[1720]: time="2024-06-25T18:45:14.399617347Z" level=info msg="TearDown network for sandbox \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\" successfully" Jun 25 18:45:14.400401 containerd[1720]: time="2024-06-25T18:45:14.399646247Z" level=info msg="StopPodSandbox for \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\" returns successfully" Jun 25 18:45:14.400401 containerd[1720]: time="2024-06-25T18:45:14.400290663Z" level=info msg="RemovePodSandbox for \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\"" Jun 25 18:45:14.400401 containerd[1720]: time="2024-06-25T18:45:14.400327964Z" level=info msg="Forcibly stopping sandbox \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\"" Jun 25 18:45:14.512428 containerd[1720]: 2024-06-25 18:45:14.451 [WARNING][5226] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"2b332540-db59-490d-9479-ddef50c50dc9", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-7f29c71dfa", ContainerID:"1dad6ca6d747e05234680f5b88623ab83ea4e3e788735108d5fada3ce355d895", Pod:"coredns-5dd5756b68-lk7gj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3c4081628e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:14.512428 containerd[1720]: 2024-06-25 18:45:14.452 [INFO][5226] k8s.go 608: Cleaning up netns ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Jun 25 18:45:14.512428 containerd[1720]: 2024-06-25 18:45:14.452 [INFO][5226] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" iface="eth0" netns="" Jun 25 18:45:14.512428 containerd[1720]: 2024-06-25 18:45:14.452 [INFO][5226] k8s.go 615: Releasing IP address(es) ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Jun 25 18:45:14.512428 containerd[1720]: 2024-06-25 18:45:14.452 [INFO][5226] utils.go 188: Calico CNI releasing IP address ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Jun 25 18:45:14.512428 containerd[1720]: 2024-06-25 18:45:14.498 [INFO][5236] ipam_plugin.go 411: Releasing address using handleID ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" HandleID="k8s-pod-network.d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0" Jun 25 18:45:14.512428 containerd[1720]: 2024-06-25 18:45:14.499 [INFO][5236] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:14.512428 containerd[1720]: 2024-06-25 18:45:14.499 [INFO][5236] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:14.512428 containerd[1720]: 2024-06-25 18:45:14.507 [WARNING][5236] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" HandleID="k8s-pod-network.d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0" Jun 25 18:45:14.512428 containerd[1720]: 2024-06-25 18:45:14.507 [INFO][5236] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" HandleID="k8s-pod-network.d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-coredns--5dd5756b68--lk7gj-eth0" Jun 25 18:45:14.512428 containerd[1720]: 2024-06-25 18:45:14.509 [INFO][5236] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:14.512428 containerd[1720]: 2024-06-25 18:45:14.511 [INFO][5226] k8s.go 621: Teardown processing complete. ContainerID="d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd" Jun 25 18:45:14.513063 containerd[1720]: time="2024-06-25T18:45:14.512482077Z" level=info msg="TearDown network for sandbox \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\" successfully" Jun 25 18:45:14.520288 containerd[1720]: time="2024-06-25T18:45:14.519893156Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:45:14.520597 containerd[1720]: time="2024-06-25T18:45:14.520471770Z" level=info msg="RemovePodSandbox \"d1a0e57a7a4d88d579f7514e3ac37209153c36f94f2532c7b5ea2006b69380bd\" returns successfully" Jun 25 18:45:14.635656 containerd[1720]: time="2024-06-25T18:45:14.635591655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:14.638571 containerd[1720]: time="2024-06-25T18:45:14.638501225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 18:45:14.642460 containerd[1720]: time="2024-06-25T18:45:14.642403620Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:14.654212 containerd[1720]: time="2024-06-25T18:45:14.654141104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:14.654992 containerd[1720]: time="2024-06-25T18:45:14.654850021Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.442975898s" Jun 25 18:45:14.654992 containerd[1720]: time="2024-06-25T18:45:14.654891322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 18:45:14.657120 containerd[1720]: time="2024-06-25T18:45:14.656952272Z" level=info 
msg="CreateContainer within sandbox \"4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 18:45:14.688718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4030562173.mount: Deactivated successfully. Jun 25 18:45:14.693407 containerd[1720]: time="2024-06-25T18:45:14.693370653Z" level=info msg="CreateContainer within sandbox \"4187d5a429d5798172357f4f30b645a4808b69b177dd1d67518d73b1fa5db3ef\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b3e4a73f9b3129813c25bf2a3e1589e878458f6169a026b28c0d9d63ba3e01b1\"" Jun 25 18:45:14.695342 containerd[1720]: time="2024-06-25T18:45:14.693817863Z" level=info msg="StartContainer for \"b3e4a73f9b3129813c25bf2a3e1589e878458f6169a026b28c0d9d63ba3e01b1\"" Jun 25 18:45:14.733486 systemd[1]: Started cri-containerd-b3e4a73f9b3129813c25bf2a3e1589e878458f6169a026b28c0d9d63ba3e01b1.scope - libcontainer container b3e4a73f9b3129813c25bf2a3e1589e878458f6169a026b28c0d9d63ba3e01b1. Jun 25 18:45:14.772499 containerd[1720]: time="2024-06-25T18:45:14.772258461Z" level=info msg="StartContainer for \"b3e4a73f9b3129813c25bf2a3e1589e878458f6169a026b28c0d9d63ba3e01b1\" returns successfully" Jun 25 18:45:15.001255 kubelet[3151]: I0625 18:45:15.001178 3151 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 18:45:15.001255 kubelet[3151]: I0625 18:45:15.001236 3151 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 18:45:15.118879 kubelet[3151]: I0625 18:45:15.118735 3151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-dgqjn" podStartSLOduration=34.756086066 podCreationTimestamp="2024-06-25 18:44:34 +0000 UTC" firstStartedPulling="2024-06-25 18:45:08.292714157 +0000 UTC m=+54.987232734" lastFinishedPulling="2024-06-25 18:45:14.655316032 +0000 UTC m=+61.349834609" observedRunningTime="2024-06-25 18:45:15.116299183 +0000 UTC m=+61.810817760" watchObservedRunningTime="2024-06-25 18:45:15.118687941 +0000 UTC m=+61.813206618" Jun 25 18:45:29.047567 kubelet[3151]: I0625 18:45:29.047519 3151 topology_manager.go:215] "Topology Admit Handler" podUID="5fd0f7f0-e580-4133-97c2-b213b91df04c" podNamespace="calico-apiserver" podName="calico-apiserver-666c56b4c6-ljnzv" Jun 25 18:45:29.057459 systemd[1]: Created slice kubepods-besteffort-pod5fd0f7f0_e580_4133_97c2_b213b91df04c.slice - libcontainer container kubepods-besteffort-pod5fd0f7f0_e580_4133_97c2_b213b91df04c.slice. Jun 25 18:45:29.071381 kubelet[3151]: I0625 18:45:29.071007 3151 topology_manager.go:215] "Topology Admit Handler" podUID="f7655d22-2677-4495-a796-a69bfe121d1c" podNamespace="calico-apiserver" podName="calico-apiserver-666c56b4c6-6qq4k" Jun 25 18:45:29.083296 systemd[1]: Created slice kubepods-besteffort-podf7655d22_2677_4495_a796_a69bfe121d1c.slice - libcontainer container kubepods-besteffort-podf7655d22_2677_4495_a796_a69bfe121d1c.slice. 
Jun 25 18:45:29.102545 kubelet[3151]: I0625 18:45:29.102383 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5fd0f7f0-e580-4133-97c2-b213b91df04c-calico-apiserver-certs\") pod \"calico-apiserver-666c56b4c6-ljnzv\" (UID: \"5fd0f7f0-e580-4133-97c2-b213b91df04c\") " pod="calico-apiserver/calico-apiserver-666c56b4c6-ljnzv" Jun 25 18:45:29.102545 kubelet[3151]: I0625 18:45:29.102444 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5frp6\" (UniqueName: \"kubernetes.io/projected/5fd0f7f0-e580-4133-97c2-b213b91df04c-kube-api-access-5frp6\") pod \"calico-apiserver-666c56b4c6-ljnzv\" (UID: \"5fd0f7f0-e580-4133-97c2-b213b91df04c\") " pod="calico-apiserver/calico-apiserver-666c56b4c6-ljnzv" Jun 25 18:45:29.204228 kubelet[3151]: I0625 18:45:29.203731 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czdxg\" (UniqueName: \"kubernetes.io/projected/f7655d22-2677-4495-a796-a69bfe121d1c-kube-api-access-czdxg\") pod \"calico-apiserver-666c56b4c6-6qq4k\" (UID: \"f7655d22-2677-4495-a796-a69bfe121d1c\") " pod="calico-apiserver/calico-apiserver-666c56b4c6-6qq4k" Jun 25 18:45:29.204228 kubelet[3151]: I0625 18:45:29.203822 3151 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f7655d22-2677-4495-a796-a69bfe121d1c-calico-apiserver-certs\") pod \"calico-apiserver-666c56b4c6-6qq4k\" (UID: \"f7655d22-2677-4495-a796-a69bfe121d1c\") " pod="calico-apiserver/calico-apiserver-666c56b4c6-6qq4k" Jun 25 18:45:29.365055 containerd[1720]: time="2024-06-25T18:45:29.364389124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-666c56b4c6-ljnzv,Uid:5fd0f7f0-e580-4133-97c2-b213b91df04c,Namespace:calico-apiserver,Attempt:0,}" Jun 25 18:45:29.389761 containerd[1720]: time="2024-06-25T18:45:29.388978878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-666c56b4c6-6qq4k,Uid:f7655d22-2677-4495-a796-a69bfe121d1c,Namespace:calico-apiserver,Attempt:0,}" Jun 25 18:45:29.523309 systemd-networkd[1509]: cali2116b7da6e7: Link UP Jun 25 18:45:29.523576 systemd-networkd[1509]: cali2116b7da6e7: Gained carrier Jun 25 18:45:29.552760 containerd[1720]: 2024-06-25 18:45:29.423 [INFO][5345] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--ljnzv-eth0 calico-apiserver-666c56b4c6- calico-apiserver 5fd0f7f0-e580-4133-97c2-b213b91df04c 878 0 2024-06-25 18:45:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:666c56b4c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012.0.0-a-7f29c71dfa calico-apiserver-666c56b4c6-ljnzv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2116b7da6e7 [] []}} ContainerID="47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c" Namespace="calico-apiserver" Pod="calico-apiserver-666c56b4c6-ljnzv" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--ljnzv-" Jun 25 18:45:29.552760 containerd[1720]: 2024-06-25 18:45:29.423 [INFO][5345] k8s.go 77: Extracted identifiers for 
CmdAddK8s ContainerID="47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c" Namespace="calico-apiserver" Pod="calico-apiserver-666c56b4c6-ljnzv" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--ljnzv-eth0" Jun 25 18:45:29.552760 containerd[1720]: 2024-06-25 18:45:29.466 [INFO][5357] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c" HandleID="k8s-pod-network.47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--ljnzv-eth0" Jun 25 18:45:29.552760 containerd[1720]: 2024-06-25 18:45:29.482 [INFO][5357] ipam_plugin.go 264: Auto assigning IP ContainerID="47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c" HandleID="k8s-pod-network.47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--ljnzv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002edcc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012.0.0-a-7f29c71dfa", "pod":"calico-apiserver-666c56b4c6-ljnzv", "timestamp":"2024-06-25 18:45:29.466962917 +0000 UTC"}, Hostname:"ci-4012.0.0-a-7f29c71dfa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:29.552760 containerd[1720]: 2024-06-25 18:45:29.482 [INFO][5357] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:29.552760 containerd[1720]: 2024-06-25 18:45:29.482 [INFO][5357] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:45:29.552760 containerd[1720]: 2024-06-25 18:45:29.482 [INFO][5357] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-7f29c71dfa' Jun 25 18:45:29.552760 containerd[1720]: 2024-06-25 18:45:29.484 [INFO][5357] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:29.552760 containerd[1720]: 2024-06-25 18:45:29.492 [INFO][5357] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:29.552760 containerd[1720]: 2024-06-25 18:45:29.497 [INFO][5357] ipam.go 489: Trying affinity for 192.168.103.192/26 host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:29.552760 containerd[1720]: 2024-06-25 18:45:29.499 [INFO][5357] ipam.go 155: Attempting to load block cidr=192.168.103.192/26 host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:29.552760 containerd[1720]: 2024-06-25 18:45:29.502 [INFO][5357] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:29.552760 containerd[1720]: 2024-06-25 18:45:29.502 [INFO][5357] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:29.552760 containerd[1720]: 2024-06-25 18:45:29.504 [INFO][5357] ipam.go 1685: Creating new handle: k8s-pod-network.47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c Jun 25 18:45:29.552760 containerd[1720]: 2024-06-25 18:45:29.508 [INFO][5357] ipam.go 1203: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:29.552760 containerd[1720]: 2024-06-25 18:45:29.516 [INFO][5357] ipam.go 1216: Successfully claimed IPs: [192.168.103.197/26] block=192.168.103.192/26 handle="k8s-pod-network.47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:29.552760 containerd[1720]: 2024-06-25 18:45:29.516 [INFO][5357] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.103.197/26] handle="k8s-pod-network.47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:29.552760 containerd[1720]: 2024-06-25 18:45:29.516 [INFO][5357] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:45:29.552760 containerd[1720]: 2024-06-25 18:45:29.516 [INFO][5357] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.103.197/26] IPv6=[] ContainerID="47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c" HandleID="k8s-pod-network.47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--ljnzv-eth0" Jun 25 18:45:29.553895 containerd[1720]: 2024-06-25 18:45:29.518 [INFO][5345] k8s.go 386: Populated endpoint ContainerID="47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c" Namespace="calico-apiserver" Pod="calico-apiserver-666c56b4c6-ljnzv" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--ljnzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--ljnzv-eth0", GenerateName:"calico-apiserver-666c56b4c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5fd0f7f0-e580-4133-97c2-b213b91df04c", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"666c56b4c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-7f29c71dfa", ContainerID:"", Pod:"calico-apiserver-666c56b4c6-ljnzv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2116b7da6e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:29.553895 containerd[1720]: 2024-06-25 18:45:29.519 [INFO][5345] k8s.go 387: Calico CNI using IPs: [192.168.103.197/32] ContainerID="47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c" Namespace="calico-apiserver" Pod="calico-apiserver-666c56b4c6-ljnzv" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--ljnzv-eth0" Jun 25 18:45:29.553895 containerd[1720]: 2024-06-25 18:45:29.519 [INFO][5345] dataplane_linux.go 68: Setting the host side veth name to cali2116b7da6e7 ContainerID="47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c" Namespace="calico-apiserver" Pod="calico-apiserver-666c56b4c6-ljnzv" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--ljnzv-eth0" Jun 25 18:45:29.553895 containerd[1720]: 2024-06-25 18:45:29.525 [INFO][5345] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c" Namespace="calico-apiserver" Pod="calico-apiserver-666c56b4c6-ljnzv" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--ljnzv-eth0" Jun 25 18:45:29.553895 containerd[1720]: 2024-06-25 18:45:29.525 [INFO][5345] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c" Namespace="calico-apiserver" Pod="calico-apiserver-666c56b4c6-ljnzv" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--ljnzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--ljnzv-eth0", GenerateName:"calico-apiserver-666c56b4c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"5fd0f7f0-e580-4133-97c2-b213b91df04c", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"666c56b4c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-7f29c71dfa", ContainerID:"47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c", Pod:"calico-apiserver-666c56b4c6-ljnzv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2116b7da6e7", MAC:"72:54:2a:dc:15:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:29.553895 containerd[1720]: 2024-06-25 18:45:29.546 [INFO][5345] k8s.go 500: Wrote updated endpoint to datastore ContainerID="47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c" Namespace="calico-apiserver" Pod="calico-apiserver-666c56b4c6-ljnzv" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--ljnzv-eth0" Jun 25 18:45:29.602590 containerd[1720]: time="2024-06-25T18:45:29.602209217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:29.602590 containerd[1720]: time="2024-06-25T18:45:29.602280216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:29.602590 containerd[1720]: time="2024-06-25T18:45:29.602314416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:29.602590 containerd[1720]: time="2024-06-25T18:45:29.602334616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:29.628204 systemd-networkd[1509]: cali99b3808367d: Link UP Jun 25 18:45:29.629750 systemd-networkd[1509]: cali99b3808367d: Gained carrier Jun 25 18:45:29.646077 systemd[1]: Started cri-containerd-47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c.scope - libcontainer container 47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c. 
Jun 25 18:45:29.660193 containerd[1720]: 2024-06-25 18:45:29.486 [INFO][5356] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--6qq4k-eth0 calico-apiserver-666c56b4c6- calico-apiserver f7655d22-2677-4495-a796-a69bfe121d1c 880 0 2024-06-25 18:45:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:666c56b4c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012.0.0-a-7f29c71dfa calico-apiserver-666c56b4c6-6qq4k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali99b3808367d [] []}} ContainerID="90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637" Namespace="calico-apiserver" Pod="calico-apiserver-666c56b4c6-6qq4k" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--6qq4k-" Jun 25 18:45:29.660193 containerd[1720]: 2024-06-25 18:45:29.487 [INFO][5356] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637" Namespace="calico-apiserver" Pod="calico-apiserver-666c56b4c6-6qq4k" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--6qq4k-eth0" Jun 25 18:45:29.660193 containerd[1720]: 2024-06-25 18:45:29.531 [INFO][5377] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637" HandleID="k8s-pod-network.90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--6qq4k-eth0" Jun 25 18:45:29.660193 containerd[1720]: 2024-06-25 18:45:29.550 [INFO][5377] ipam_plugin.go 264: Auto assigning IP ContainerID="90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637" HandleID="k8s-pod-network.90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--6qq4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a3e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012.0.0-a-7f29c71dfa", "pod":"calico-apiserver-666c56b4c6-6qq4k", "timestamp":"2024-06-25 18:45:29.531177637 +0000 UTC"}, Hostname:"ci-4012.0.0-a-7f29c71dfa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:29.660193 containerd[1720]: 2024-06-25 18:45:29.551 [INFO][5377] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:29.660193 containerd[1720]: 2024-06-25 18:45:29.551 [INFO][5377] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:45:29.660193 containerd[1720]: 2024-06-25 18:45:29.551 [INFO][5377] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-7f29c71dfa' Jun 25 18:45:29.660193 containerd[1720]: 2024-06-25 18:45:29.555 [INFO][5377] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:29.660193 containerd[1720]: 2024-06-25 18:45:29.566 [INFO][5377] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:29.660193 containerd[1720]: 2024-06-25 18:45:29.573 [INFO][5377] ipam.go 489: Trying affinity for 192.168.103.192/26 host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:29.660193 containerd[1720]: 2024-06-25 18:45:29.575 [INFO][5377] ipam.go 155: Attempting to load block cidr=192.168.103.192/26 host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:29.660193 containerd[1720]: 2024-06-25 18:45:29.581 [INFO][5377] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:29.660193 containerd[1720]: 2024-06-25 18:45:29.585 [INFO][5377] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:29.660193 containerd[1720]: 2024-06-25 18:45:29.590 [INFO][5377] ipam.go 1685: Creating new handle: k8s-pod-network.90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637 Jun 25 18:45:29.660193 containerd[1720]: 2024-06-25 18:45:29.597 [INFO][5377] ipam.go 1203: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:29.660193 containerd[1720]: 2024-06-25 18:45:29.606 [INFO][5377] ipam.go 1216: Successfully claimed IPs: [192.168.103.198/26] block=192.168.103.192/26 handle="k8s-pod-network.90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:29.660193 containerd[1720]: 2024-06-25 18:45:29.607 [INFO][5377] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.103.198/26] handle="k8s-pod-network.90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637" host="ci-4012.0.0-a-7f29c71dfa" Jun 25 18:45:29.660193 containerd[1720]: 2024-06-25 18:45:29.607 [INFO][5377] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:45:29.660193 containerd[1720]: 2024-06-25 18:45:29.607 [INFO][5377] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.103.198/26] IPv6=[] ContainerID="90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637" HandleID="k8s-pod-network.90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637" Workload="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--6qq4k-eth0" Jun 25 18:45:29.661099 containerd[1720]: 2024-06-25 18:45:29.617 [INFO][5356] k8s.go 386: Populated endpoint ContainerID="90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637" Namespace="calico-apiserver" Pod="calico-apiserver-666c56b4c6-6qq4k" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--6qq4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--6qq4k-eth0", GenerateName:"calico-apiserver-666c56b4c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f7655d22-2677-4495-a796-a69bfe121d1c", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"666c56b4c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-7f29c71dfa", ContainerID:"", Pod:"calico-apiserver-666c56b4c6-6qq4k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali99b3808367d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:29.661099 containerd[1720]: 2024-06-25 18:45:29.617 [INFO][5356] k8s.go 387: Calico CNI using IPs: [192.168.103.198/32] ContainerID="90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637" Namespace="calico-apiserver" Pod="calico-apiserver-666c56b4c6-6qq4k" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--6qq4k-eth0" Jun 25 18:45:29.661099 containerd[1720]: 2024-06-25 18:45:29.617 [INFO][5356] dataplane_linux.go 68: Setting the host side veth name to cali99b3808367d ContainerID="90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637" Namespace="calico-apiserver" Pod="calico-apiserver-666c56b4c6-6qq4k" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--6qq4k-eth0" Jun 25 18:45:29.661099 containerd[1720]: 2024-06-25 18:45:29.629 [INFO][5356] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637" Namespace="calico-apiserver" Pod="calico-apiserver-666c56b4c6-6qq4k" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--6qq4k-eth0" Jun 25 18:45:29.661099 containerd[1720]: 2024-06-25 18:45:29.632 [INFO][5356] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637" Namespace="calico-apiserver" Pod="calico-apiserver-666c56b4c6-6qq4k" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--6qq4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--6qq4k-eth0", GenerateName:"calico-apiserver-666c56b4c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f7655d22-2677-4495-a796-a69bfe121d1c", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"666c56b4c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-7f29c71dfa", ContainerID:"90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637", Pod:"calico-apiserver-666c56b4c6-6qq4k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali99b3808367d", MAC:"f6:3e:9f:28:65:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:29.661099 containerd[1720]: 2024-06-25 18:45:29.657 [INFO][5356] k8s.go 500: Wrote updated endpoint to datastore ContainerID="90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637" Namespace="calico-apiserver" Pod="calico-apiserver-666c56b4c6-6qq4k" WorkloadEndpoint="ci--4012.0.0--a--7f29c71dfa-k8s-calico--apiserver--666c56b4c6--6qq4k-eth0" Jun 25 18:45:29.708266 containerd[1720]: time="2024-06-25T18:45:29.707215196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:29.708266 containerd[1720]: time="2024-06-25T18:45:29.707475094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:29.708266 containerd[1720]: time="2024-06-25T18:45:29.707504794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:29.708266 containerd[1720]: time="2024-06-25T18:45:29.707539194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:29.739812 systemd[1]: Started cri-containerd-90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637.scope - libcontainer container 90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637. 
Jun 25 18:45:29.769177 containerd[1720]: time="2024-06-25T18:45:29.769138420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-666c56b4c6-ljnzv,Uid:5fd0f7f0-e580-4133-97c2-b213b91df04c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c\"" Jun 25 18:45:29.772201 containerd[1720]: time="2024-06-25T18:45:29.772172279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 18:45:29.805042 containerd[1720]: time="2024-06-25T18:45:29.805006220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-666c56b4c6-6qq4k,Uid:f7655d22-2677-4495-a796-a69bfe121d1c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637\"" Jun 25 18:45:30.679597 systemd-networkd[1509]: cali99b3808367d: Gained IPv6LL Jun 25 18:45:31.575489 systemd-networkd[1509]: cali2116b7da6e7: Gained IPv6LL Jun 25 18:45:35.105128 containerd[1720]: time="2024-06-25T18:45:35.104920657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:35.107091 containerd[1720]: time="2024-06-25T18:45:35.107026954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 18:45:35.111748 containerd[1720]: time="2024-06-25T18:45:35.111690547Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:35.115742 containerd[1720]: time="2024-06-25T18:45:35.115687241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:35.116556 containerd[1720]: time="2024-06-25T18:45:35.116419740Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 5.343974456s" Jun 25 18:45:35.116556 containerd[1720]: time="2024-06-25T18:45:35.116457040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 18:45:35.117713 containerd[1720]: time="2024-06-25T18:45:35.117163539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 18:45:35.118984 containerd[1720]: time="2024-06-25T18:45:35.118850437Z" level=info msg="CreateContainer within sandbox \"47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 18:45:35.156554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2065973133.mount: Deactivated successfully. 
Jun 25 18:45:35.163218 containerd[1720]: time="2024-06-25T18:45:35.162685873Z" level=info msg="CreateContainer within sandbox \"47e7594894dbeb60ea4c21c28aff3af10c83838a995879c397640b8eb4cf611c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"88905fab954219598c954508b5686cb05ac3cd6ac6df25dc6b96dba090c75ca3\"" Jun 25 18:45:35.163346 containerd[1720]: time="2024-06-25T18:45:35.163326673Z" level=info msg="StartContainer for \"88905fab954219598c954508b5686cb05ac3cd6ac6df25dc6b96dba090c75ca3\"" Jun 25 18:45:35.224578 systemd[1]: Started cri-containerd-88905fab954219598c954508b5686cb05ac3cd6ac6df25dc6b96dba090c75ca3.scope - libcontainer container 88905fab954219598c954508b5686cb05ac3cd6ac6df25dc6b96dba090c75ca3. Jun 25 18:45:35.292024 containerd[1720]: time="2024-06-25T18:45:35.291974886Z" level=info msg="StartContainer for \"88905fab954219598c954508b5686cb05ac3cd6ac6df25dc6b96dba090c75ca3\" returns successfully" Jun 25 18:45:35.585657 containerd[1720]: time="2024-06-25T18:45:35.585587661Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:35.588113 containerd[1720]: time="2024-06-25T18:45:35.587019959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Jun 25 18:45:35.590326 containerd[1720]: time="2024-06-25T18:45:35.590293455Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 473.092816ms" Jun 25 18:45:35.591363 containerd[1720]: time="2024-06-25T18:45:35.590566054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 18:45:35.595643 containerd[1720]: time="2024-06-25T18:45:35.595505947Z" level=info msg="CreateContainer within sandbox \"90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 18:45:35.638057 containerd[1720]: time="2024-06-25T18:45:35.638001686Z" level=info msg="CreateContainer within sandbox \"90a6ce6f2e11cc1f87a7744f0ebf1b1726ff8f3b9432cdb82a4314b130a5d637\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9c063e46fb03a009390f4edd8cd675ccdc702a9cfa13eb550c51744c83d7be17\"" Jun 25 18:45:35.639010 containerd[1720]: time="2024-06-25T18:45:35.638974284Z" level=info msg="StartContainer for \"9c063e46fb03a009390f4edd8cd675ccdc702a9cfa13eb550c51744c83d7be17\"" Jun 25 18:45:35.679700 systemd[1]: Started cri-containerd-9c063e46fb03a009390f4edd8cd675ccdc702a9cfa13eb550c51744c83d7be17.scope - libcontainer container 9c063e46fb03a009390f4edd8cd675ccdc702a9cfa13eb550c51744c83d7be17. 
Jun 25 18:45:35.755159 containerd[1720]: time="2024-06-25T18:45:35.755024916Z" level=info msg="StartContainer for \"9c063e46fb03a009390f4edd8cd675ccdc702a9cfa13eb550c51744c83d7be17\" returns successfully" Jun 25 18:45:36.180239 kubelet[3151]: I0625 18:45:36.180192 3151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-666c56b4c6-ljnzv" podStartSLOduration=1.834019614 podCreationTimestamp="2024-06-25 18:45:29 +0000 UTC" firstStartedPulling="2024-06-25 18:45:29.770837653 +0000 UTC m=+76.465356230" lastFinishedPulling="2024-06-25 18:45:35.11696384 +0000 UTC m=+81.811482417" observedRunningTime="2024-06-25 18:45:36.177951504 +0000 UTC m=+82.872470081" watchObservedRunningTime="2024-06-25 18:45:36.180145801 +0000 UTC m=+82.874664378" Jun 25 18:45:36.433486 kubelet[3151]: I0625 18:45:36.433299 3151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-666c56b4c6-6qq4k" podStartSLOduration=1.647670124 podCreationTimestamp="2024-06-25 18:45:29 +0000 UTC" firstStartedPulling="2024-06-25 18:45:29.806111242 +0000 UTC m=+76.500629919" lastFinishedPulling="2024-06-25 18:45:35.591684053 +0000 UTC m=+82.286202630" observedRunningTime="2024-06-25 18:45:36.191389885 +0000 UTC m=+82.885908562" watchObservedRunningTime="2024-06-25 18:45:36.433242835 +0000 UTC m=+83.127761412" Jun 25 18:45:49.898933 systemd[1]: run-containerd-runc-k8s.io-6e6c7c61022c1f9bc46e535a60ab67eb2d8444835af736a102f081b8642b843a-runc.26ULaW.mount: Deactivated successfully. Jun 25 18:46:15.786654 systemd[1]: Started sshd@7-10.200.8.39:22-10.200.16.10:55346.service - OpenSSH per-connection server daemon (10.200.16.10:55346). Jun 25 18:46:16.430529 sshd[5690]: Accepted publickey for core from 10.200.16.10 port 55346 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:46:16.432062 sshd[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:16.436784 systemd-logind[1695]: New session 10 of user core. Jun 25 18:46:16.441502 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 18:46:16.960734 sshd[5690]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:16.964624 systemd-logind[1695]: Session 10 logged out. Waiting for processes to exit. Jun 25 18:46:16.966327 systemd[1]: sshd@7-10.200.8.39:22-10.200.16.10:55346.service: Deactivated successfully. Jun 25 18:46:16.970040 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 18:46:16.972342 systemd-logind[1695]: Removed session 10. Jun 25 18:46:22.077654 systemd[1]: Started sshd@8-10.200.8.39:22-10.200.16.10:55348.service - OpenSSH per-connection server daemon (10.200.16.10:55348). Jun 25 18:46:22.724433 sshd[5729]: Accepted publickey for core from 10.200.16.10 port 55348 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:46:22.725931 sshd[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:22.730687 systemd-logind[1695]: New session 11 of user core. Jun 25 18:46:22.740508 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 18:46:23.239142 sshd[5729]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:23.242902 systemd[1]: sshd@8-10.200.8.39:22-10.200.16.10:55348.service: Deactivated successfully. Jun 25 18:46:23.245459 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 18:46:23.247115 systemd-logind[1695]: Session 11 logged out. Waiting for processes to exit. 
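The two pod_startup_latency_tracker entries above report podStartSLOduration together with the raw timestamps, and the numbers are consistent with a simple rule: time from pod creation to the pod being observed running on the watch, minus the image-pull window (firstStartedPulling to lastFinishedPulling). A small recomputation with the ljnzv pod's values reproduces the logged 1.834019614s; treat the formula as inferred from these numbers rather than as a statement of kubelet internals.

package main

import (
	"fmt"
	"strings"
	"time"
)

// parse reads timestamps the way kubelet logs them, e.g.
// "2024-06-25 18:45:36.180145801 +0000 UTC m=+82.874664378";
// the trailing monotonic reading ("m=+...") is stripped before parsing.
func parse(s string) time.Time {
	s = strings.SplitN(s, " m=", 2)[0]
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the calico-apiserver-666c56b4c6-ljnzv entry above.
	created := parse("2024-06-25 18:45:29 +0000 UTC")
	firstPull := parse("2024-06-25 18:45:29.770837653 +0000 UTC m=+76.465356230")
	lastPull := parse("2024-06-25 18:45:35.11696384 +0000 UTC m=+81.811482417")
	watchRunning := parse("2024-06-25 18:45:36.180145801 +0000 UTC m=+82.874664378")

	// Time to running on the watch, minus the image-pull window.
	slo := watchRunning.Sub(created) - lastPull.Sub(firstPull)
	fmt.Printf("podStartSLOduration ~ %.9fs\n", slo.Seconds()) // ~ 1.834019614s, as logged
}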
Jun 25 18:46:23.248322 systemd-logind[1695]: Removed session 11. Jun 25 18:46:24.248597 systemd[1]: run-containerd-runc-k8s.io-196e7f8a645a74003a801794f630ca1015493bc41fab89b163c32099ef5f806c-runc.nCAxIA.mount: Deactivated successfully. Jun 25 18:46:28.359669 systemd[1]: Started sshd@9-10.200.8.39:22-10.200.16.10:54502.service - OpenSSH per-connection server daemon (10.200.16.10:54502). Jun 25 18:46:29.022276 sshd[5769]: Accepted publickey for core from 10.200.16.10 port 54502 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:46:29.023846 sshd[5769]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:29.027975 systemd-logind[1695]: New session 12 of user core. Jun 25 18:46:29.037521 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 18:46:29.538505 sshd[5769]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:29.542041 systemd[1]: sshd@9-10.200.8.39:22-10.200.16.10:54502.service: Deactivated successfully. Jun 25 18:46:29.544707 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 18:46:29.546650 systemd-logind[1695]: Session 12 logged out. Waiting for processes to exit. Jun 25 18:46:29.548140 systemd-logind[1695]: Removed session 12. Jun 25 18:46:34.658669 systemd[1]: Started sshd@10-10.200.8.39:22-10.200.16.10:48332.service - OpenSSH per-connection server daemon (10.200.16.10:48332). Jun 25 18:46:35.300802 sshd[5785]: Accepted publickey for core from 10.200.16.10 port 48332 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:46:35.302258 sshd[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:35.306410 systemd-logind[1695]: New session 13 of user core. Jun 25 18:46:35.310539 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 18:46:35.812002 sshd[5785]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:35.815145 systemd[1]: sshd@10-10.200.8.39:22-10.200.16.10:48332.service: Deactivated successfully. Jun 25 18:46:35.817799 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:46:35.819419 systemd-logind[1695]: Session 13 logged out. Waiting for processes to exit. Jun 25 18:46:35.820761 systemd-logind[1695]: Removed session 13. Jun 25 18:46:35.927381 systemd[1]: Started sshd@11-10.200.8.39:22-10.200.16.10:48336.service - OpenSSH per-connection server daemon (10.200.16.10:48336). Jun 25 18:46:36.574196 sshd[5804]: Accepted publickey for core from 10.200.16.10 port 48336 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:46:36.575755 sshd[5804]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:36.580514 systemd-logind[1695]: New session 14 of user core. Jun 25 18:46:36.584515 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 18:46:37.712091 sshd[5804]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:37.715618 systemd[1]: sshd@11-10.200.8.39:22-10.200.16.10:48336.service: Deactivated successfully. Jun 25 18:46:37.717860 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 18:46:37.719556 systemd-logind[1695]: Session 14 logged out. Waiting for processes to exit. Jun 25 18:46:37.720673 systemd-logind[1695]: Removed session 14. Jun 25 18:46:37.829677 systemd[1]: Started sshd@12-10.200.8.39:22-10.200.16.10:48338.service - OpenSSH per-connection server daemon (10.200.16.10:48338). 
Jun 25 18:46:38.467695 sshd[5827]: Accepted publickey for core from 10.200.16.10 port 48338 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:46:38.469189 sshd[5827]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:38.474762 systemd-logind[1695]: New session 15 of user core. Jun 25 18:46:38.477513 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 18:46:38.978483 sshd[5827]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:38.982418 systemd[1]: sshd@12-10.200.8.39:22-10.200.16.10:48338.service: Deactivated successfully. Jun 25 18:46:38.984536 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 18:46:38.985332 systemd-logind[1695]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:46:38.986594 systemd-logind[1695]: Removed session 15. Jun 25 18:46:44.100643 systemd[1]: Started sshd@13-10.200.8.39:22-10.200.16.10:48348.service - OpenSSH per-connection server daemon (10.200.16.10:48348). Jun 25 18:46:44.750637 sshd[5840]: Accepted publickey for core from 10.200.16.10 port 48348 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:46:44.752489 sshd[5840]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:44.757560 systemd-logind[1695]: New session 16 of user core. Jun 25 18:46:44.762493 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 18:46:45.265429 sshd[5840]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:45.271968 systemd[1]: sshd@13-10.200.8.39:22-10.200.16.10:48348.service: Deactivated successfully. Jun 25 18:46:45.272020 systemd-logind[1695]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:46:45.276714 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:46:45.278729 systemd-logind[1695]: Removed session 16. Jun 25 18:46:50.388686 systemd[1]: Started sshd@14-10.200.8.39:22-10.200.16.10:38724.service - OpenSSH per-connection server daemon (10.200.16.10:38724). Jun 25 18:46:51.033670 sshd[5883]: Accepted publickey for core from 10.200.16.10 port 38724 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:46:51.035257 sshd[5883]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:51.040180 systemd-logind[1695]: New session 17 of user core. Jun 25 18:46:51.042550 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 18:46:51.545780 sshd[5883]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:51.550370 systemd[1]: sshd@14-10.200.8.39:22-10.200.16.10:38724.service: Deactivated successfully. Jun 25 18:46:51.552841 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 18:46:51.553965 systemd-logind[1695]: Session 17 logged out. Waiting for processes to exit. Jun 25 18:46:51.555147 systemd-logind[1695]: Removed session 17. Jun 25 18:46:56.667664 systemd[1]: Started sshd@15-10.200.8.39:22-10.200.16.10:35020.service - OpenSSH per-connection server daemon (10.200.16.10:35020). Jun 25 18:46:57.311042 sshd[5922]: Accepted publickey for core from 10.200.16.10 port 35020 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:46:57.312858 sshd[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:57.318253 systemd-logind[1695]: New session 18 of user core. Jun 25 18:46:57.320523 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 25 18:46:57.819706 sshd[5922]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:57.823334 systemd[1]: sshd@15-10.200.8.39:22-10.200.16.10:35020.service: Deactivated successfully. Jun 25 18:46:57.826378 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 18:46:57.828102 systemd-logind[1695]: Session 18 logged out. Waiting for processes to exit. Jun 25 18:46:57.829293 systemd-logind[1695]: Removed session 18. Jun 25 18:46:57.938651 systemd[1]: Started sshd@16-10.200.8.39:22-10.200.16.10:35030.service - OpenSSH per-connection server daemon (10.200.16.10:35030). Jun 25 18:46:58.580935 sshd[5935]: Accepted publickey for core from 10.200.16.10 port 35030 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:46:58.582448 sshd[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:58.587180 systemd-logind[1695]: New session 19 of user core. Jun 25 18:46:58.593538 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 18:46:59.252768 sshd[5935]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:59.257280 systemd[1]: sshd@16-10.200.8.39:22-10.200.16.10:35030.service: Deactivated successfully. Jun 25 18:46:59.259821 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 18:46:59.261215 systemd-logind[1695]: Session 19 logged out. Waiting for processes to exit. Jun 25 18:46:59.262454 systemd-logind[1695]: Removed session 19. Jun 25 18:46:59.374681 systemd[1]: Started sshd@17-10.200.8.39:22-10.200.16.10:35042.service - OpenSSH per-connection server daemon (10.200.16.10:35042). Jun 25 18:47:00.038228 sshd[5947]: Accepted publickey for core from 10.200.16.10 port 35042 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:00.039782 sshd[5947]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:00.044690 systemd-logind[1695]: New session 20 of user core. Jun 25 18:47:00.049521 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 18:47:01.511912 sshd[5947]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:01.515757 systemd[1]: sshd@17-10.200.8.39:22-10.200.16.10:35042.service: Deactivated successfully. Jun 25 18:47:01.519005 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 18:47:01.520920 systemd-logind[1695]: Session 20 logged out. Waiting for processes to exit. Jun 25 18:47:01.522167 systemd-logind[1695]: Removed session 20. Jun 25 18:47:01.631679 systemd[1]: Started sshd@18-10.200.8.39:22-10.200.16.10:35048.service - OpenSSH per-connection server daemon (10.200.16.10:35048). Jun 25 18:47:02.267868 sshd[5966]: Accepted publickey for core from 10.200.16.10 port 35048 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:02.269464 sshd[5966]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:02.274392 systemd-logind[1695]: New session 21 of user core. Jun 25 18:47:02.281540 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 18:47:02.981347 sshd[5966]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:02.984434 systemd[1]: sshd@18-10.200.8.39:22-10.200.16.10:35048.service: Deactivated successfully. Jun 25 18:47:02.986698 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 18:47:02.988799 systemd-logind[1695]: Session 21 logged out. Waiting for processes to exit. Jun 25 18:47:02.990043 systemd-logind[1695]: Removed session 21. 
Jun 25 18:47:03.103775 systemd[1]: Started sshd@19-10.200.8.39:22-10.200.16.10:35058.service - OpenSSH per-connection server daemon (10.200.16.10:35058). Jun 25 18:47:03.750242 sshd[5978]: Accepted publickey for core from 10.200.16.10 port 35058 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:03.751981 sshd[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:03.759183 systemd-logind[1695]: New session 22 of user core. Jun 25 18:47:03.763512 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 18:47:04.258172 sshd[5978]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:04.261471 systemd[1]: sshd@19-10.200.8.39:22-10.200.16.10:35058.service: Deactivated successfully. Jun 25 18:47:04.263828 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 18:47:04.266429 systemd-logind[1695]: Session 22 logged out. Waiting for processes to exit. Jun 25 18:47:04.267707 systemd-logind[1695]: Removed session 22. Jun 25 18:47:09.382597 systemd[1]: Started sshd@20-10.200.8.39:22-10.200.16.10:36506.service - OpenSSH per-connection server daemon (10.200.16.10:36506). Jun 25 18:47:10.046992 sshd[6017]: Accepted publickey for core from 10.200.16.10 port 36506 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:10.048761 sshd[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:10.054942 systemd-logind[1695]: New session 23 of user core. Jun 25 18:47:10.061558 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 18:47:10.557048 sshd[6017]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:10.561585 systemd[1]: sshd@20-10.200.8.39:22-10.200.16.10:36506.service: Deactivated successfully. Jun 25 18:47:10.563723 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 18:47:10.564681 systemd-logind[1695]: Session 23 logged out. Waiting for processes to exit. Jun 25 18:47:10.565724 systemd-logind[1695]: Removed session 23. Jun 25 18:47:15.677010 systemd[1]: Started sshd@21-10.200.8.39:22-10.200.16.10:53600.service - OpenSSH per-connection server daemon (10.200.16.10:53600). Jun 25 18:47:16.333698 sshd[6035]: Accepted publickey for core from 10.200.16.10 port 53600 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:16.335244 sshd[6035]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:16.339378 systemd-logind[1695]: New session 24 of user core. Jun 25 18:47:16.344522 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 25 18:47:16.847805 sshd[6035]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:16.852608 systemd[1]: sshd@21-10.200.8.39:22-10.200.16.10:53600.service: Deactivated successfully. Jun 25 18:47:16.855659 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 18:47:16.856577 systemd-logind[1695]: Session 24 logged out. Waiting for processes to exit. Jun 25 18:47:16.857632 systemd-logind[1695]: Removed session 24. Jun 25 18:47:21.967364 systemd[1]: Started sshd@22-10.200.8.39:22-10.200.16.10:53610.service - OpenSSH per-connection server daemon (10.200.16.10:53610). Jun 25 18:47:22.617504 sshd[6073]: Accepted publickey for core from 10.200.16.10 port 53610 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:22.619270 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:22.624108 systemd-logind[1695]: New session 25 of user core. 
Jun 25 18:47:22.630854 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 18:47:23.129343 sshd[6073]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:23.132644 systemd[1]: sshd@22-10.200.8.39:22-10.200.16.10:53610.service: Deactivated successfully. Jun 25 18:47:23.134906 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 18:47:23.136630 systemd-logind[1695]: Session 25 logged out. Waiting for processes to exit. Jun 25 18:47:23.137995 systemd-logind[1695]: Removed session 25. Jun 25 18:47:28.250642 systemd[1]: Started sshd@23-10.200.8.39:22-10.200.16.10:54374.service - OpenSSH per-connection server daemon (10.200.16.10:54374). Jun 25 18:47:28.898907 sshd[6111]: Accepted publickey for core from 10.200.16.10 port 54374 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:28.900452 sshd[6111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:28.905295 systemd-logind[1695]: New session 26 of user core. Jun 25 18:47:28.910527 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 25 18:47:29.412893 sshd[6111]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:29.415902 systemd[1]: sshd@23-10.200.8.39:22-10.200.16.10:54374.service: Deactivated successfully. Jun 25 18:47:29.418769 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 18:47:29.421523 systemd-logind[1695]: Session 26 logged out. Waiting for processes to exit. Jun 25 18:47:29.422771 systemd-logind[1695]: Removed session 26. Jun 25 18:47:34.526613 systemd[1]: Started sshd@24-10.200.8.39:22-10.200.16.10:54380.service - OpenSSH per-connection server daemon (10.200.16.10:54380). Jun 25 18:47:35.179411 sshd[6128]: Accepted publickey for core from 10.200.16.10 port 54380 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:35.180928 sshd[6128]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:35.185688 systemd-logind[1695]: New session 27 of user core. Jun 25 18:47:35.190791 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 25 18:47:35.700040 sshd[6128]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:35.704416 systemd[1]: sshd@24-10.200.8.39:22-10.200.16.10:54380.service: Deactivated successfully. Jun 25 18:47:35.707227 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 18:47:35.708253 systemd-logind[1695]: Session 27 logged out. Waiting for processes to exit. Jun 25 18:47:35.709799 systemd-logind[1695]: Removed session 27. Jun 25 18:47:40.815438 systemd[1]: Started sshd@25-10.200.8.39:22-10.200.16.10:48362.service - OpenSSH per-connection server daemon (10.200.16.10:48362). Jun 25 18:47:41.466235 sshd[6146]: Accepted publickey for core from 10.200.16.10 port 48362 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:41.467826 sshd[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:41.472750 systemd-logind[1695]: New session 28 of user core. Jun 25 18:47:41.477527 systemd[1]: Started session-28.scope - Session 28 of User core. Jun 25 18:47:41.975793 sshd[6146]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:41.979822 systemd[1]: sshd@25-10.200.8.39:22-10.200.16.10:48362.service: Deactivated successfully. Jun 25 18:47:41.981940 systemd[1]: session-28.scope: Deactivated successfully. Jun 25 18:47:41.982984 systemd-logind[1695]: Session 28 logged out. Waiting for processes to exit. 
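The sshd and systemd-logind entries above repeat one pattern: "Accepted publickey ... port N", a pam_unix session open, and later "session closed for user core", with a session scope started and removed around them. To summarize such a stretch of journal output, pairing the open and close lines by the sshd PID in brackets is enough to get per-session durations. The sketch below handles exactly the line shapes shown here and is not a general journald parser; the embedded sample is copied from the entries above.

package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
	"time"
)

var (
	// Line shapes taken from the journal excerpt above.
	openRe  = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) sshd\[(\d+)\]: Accepted publickey for (\S+) from (\S+) port (\d+)`)
	closeRe = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed for user`)
)

type opened struct {
	at   time.Time
	user string
	peer string
}

// ts parses the year-less "Jun 25 18:46:16.430529" syslog-style prefix.
func ts(s string) time.Time {
	t, err := time.Parse("Jan 2 15:04:05.000000", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	journal := `Jun 25 18:46:16.430529 sshd[5690]: Accepted publickey for core from 10.200.16.10 port 55346 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ
Jun 25 18:46:16.960734 sshd[5690]: pam_unix(sshd:session): session closed for user core`

	inFlight := map[string]opened{} // keyed by sshd PID
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		if m := openRe.FindStringSubmatch(line); m != nil {
			inFlight[m[2]] = opened{at: ts(m[1]), user: m[3], peer: m[4] + ":" + m[5]}
		} else if m := closeRe.FindStringSubmatch(line); m != nil {
			if o, ok := inFlight[m[2]]; ok {
				fmt.Printf("sshd[%s] %s from %s lasted %v\n", m[2], o.user, o.peer, ts(m[1]).Sub(o.at))
				delete(inFlight, m[2])
			}
		}
	}
}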
Jun 25 18:47:41.984006 systemd-logind[1695]: Removed session 28. Jun 25 18:47:56.426711 systemd[1]: cri-containerd-91d94e0f1655c56943a6706b7b7093917d6f56a0602b6882eb68ef24b77fe83e.scope: Deactivated successfully. Jun 25 18:47:56.427609 systemd[1]: cri-containerd-91d94e0f1655c56943a6706b7b7093917d6f56a0602b6882eb68ef24b77fe83e.scope: Consumed 3.686s CPU time, 22.1M memory peak, 0B memory swap peak. Jun 25 18:47:56.453616 containerd[1720]: time="2024-06-25T18:47:56.453521024Z" level=info msg="shim disconnected" id=91d94e0f1655c56943a6706b7b7093917d6f56a0602b6882eb68ef24b77fe83e namespace=k8s.io Jun 25 18:47:56.454052 containerd[1720]: time="2024-06-25T18:47:56.453619925Z" level=warning msg="cleaning up after shim disconnected" id=91d94e0f1655c56943a6706b7b7093917d6f56a0602b6882eb68ef24b77fe83e namespace=k8s.io Jun 25 18:47:56.454052 containerd[1720]: time="2024-06-25T18:47:56.453632625Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:47:56.454959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91d94e0f1655c56943a6706b7b7093917d6f56a0602b6882eb68ef24b77fe83e-rootfs.mount: Deactivated successfully. Jun 25 18:47:57.486129 kubelet[3151]: I0625 18:47:57.486016 3151 scope.go:117] "RemoveContainer" containerID="91d94e0f1655c56943a6706b7b7093917d6f56a0602b6882eb68ef24b77fe83e" Jun 25 18:47:57.493760 containerd[1720]: time="2024-06-25T18:47:57.493593558Z" level=info msg="CreateContainer within sandbox \"4f813945b56fe06cf3a002c1a26967eb2179604e9787c227a747472a489850eb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jun 25 18:47:57.522074 containerd[1720]: time="2024-06-25T18:47:57.522035121Z" level=info msg="CreateContainer within sandbox \"4f813945b56fe06cf3a002c1a26967eb2179604e9787c227a747472a489850eb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2414df25c3330586dea58a8b3231948936e52bfd1b74cd69291fdf3c310202e3\"" Jun 25 18:47:57.522640 containerd[1720]: time="2024-06-25T18:47:57.522610323Z" level=info msg="StartContainer for \"2414df25c3330586dea58a8b3231948936e52bfd1b74cd69291fdf3c310202e3\"" Jun 25 18:47:57.558391 systemd[1]: run-containerd-runc-k8s.io-2414df25c3330586dea58a8b3231948936e52bfd1b74cd69291fdf3c310202e3-runc.5LPPos.mount: Deactivated successfully. Jun 25 18:47:57.568527 systemd[1]: Started cri-containerd-2414df25c3330586dea58a8b3231948936e52bfd1b74cd69291fdf3c310202e3.scope - libcontainer container 2414df25c3330586dea58a8b3231948936e52bfd1b74cd69291fdf3c310202e3. Jun 25 18:47:57.620830 containerd[1720]: time="2024-06-25T18:47:57.620602643Z" level=info msg="StartContainer for \"2414df25c3330586dea58a8b3231948936e52bfd1b74cd69291fdf3c310202e3\" returns successfully" Jun 25 18:47:58.105782 systemd[1]: cri-containerd-a2a0bf078c8b82a55b662cff226f40d1e36a4048f5690bcba14cc420d3a4b45e.scope: Deactivated successfully. Jun 25 18:47:58.106112 systemd[1]: cri-containerd-a2a0bf078c8b82a55b662cff226f40d1e36a4048f5690bcba14cc420d3a4b45e.scope: Consumed 5.115s CPU time. 
Jun 25 18:47:58.121730 kubelet[3151]: E0625 18:47:58.121693 3151 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-7f29c71dfa?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jun 25 18:47:58.136819 containerd[1720]: time="2024-06-25T18:47:58.136748201Z" level=info msg="shim disconnected" id=a2a0bf078c8b82a55b662cff226f40d1e36a4048f5690bcba14cc420d3a4b45e namespace=k8s.io Jun 25 18:47:58.136819 containerd[1720]: time="2024-06-25T18:47:58.136815101Z" level=warning msg="cleaning up after shim disconnected" id=a2a0bf078c8b82a55b662cff226f40d1e36a4048f5690bcba14cc420d3a4b45e namespace=k8s.io Jun 25 18:47:58.136819 containerd[1720]: time="2024-06-25T18:47:58.136825501Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:47:58.139918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2a0bf078c8b82a55b662cff226f40d1e36a4048f5690bcba14cc420d3a4b45e-rootfs.mount: Deactivated successfully. Jun 25 18:47:58.161182 containerd[1720]: time="2024-06-25T18:47:58.161110755Z" level=warning msg="cleanup warnings time=\"2024-06-25T18:47:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 25 18:47:58.490222 kubelet[3151]: I0625 18:47:58.489839 3151 scope.go:117] "RemoveContainer" containerID="a2a0bf078c8b82a55b662cff226f40d1e36a4048f5690bcba14cc420d3a4b45e" Jun 25 18:47:58.493220 containerd[1720]: time="2024-06-25T18:47:58.493123100Z" level=info msg="CreateContainer within sandbox \"1254a75df7b306a9028b86764944195b6531d82c40836fb113a930db023935f6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jun 25 18:47:58.531751 containerd[1720]: time="2024-06-25T18:47:58.531703187Z" level=info msg="CreateContainer within sandbox \"1254a75df7b306a9028b86764944195b6531d82c40836fb113a930db023935f6\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"533f12fd41acaae38781495050e2d56def6862da4c3dad10cd0c637c16ac381f\"" Jun 25 18:47:58.532311 containerd[1720]: time="2024-06-25T18:47:58.532275488Z" level=info msg="StartContainer for \"533f12fd41acaae38781495050e2d56def6862da4c3dad10cd0c637c16ac381f\"" Jun 25 18:47:58.566517 systemd[1]: Started cri-containerd-533f12fd41acaae38781495050e2d56def6862da4c3dad10cd0c637c16ac381f.scope - libcontainer container 533f12fd41acaae38781495050e2d56def6862da4c3dad10cd0c637c16ac381f. Jun 25 18:47:58.593390 containerd[1720]: time="2024-06-25T18:47:58.593305625Z" level=info msg="StartContainer for \"533f12fd41acaae38781495050e2d56def6862da4c3dad10cd0c637c16ac381f\" returns successfully" Jun 25 18:48:01.914285 kubelet[3151]: E0625 18:48:01.912736 3151 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.39:46410->10.200.8.12:2379: read: connection timed out" Jun 25 18:48:01.913570 systemd[1]: cri-containerd-ab39bc059e8b1f2004a4fb0100f9eb8de7f9b525984e8b574ba3671e3a732013.scope: Deactivated successfully. Jun 25 18:48:01.913956 systemd[1]: cri-containerd-ab39bc059e8b1f2004a4fb0100f9eb8de7f9b525984e8b574ba3671e3a732013.scope: Consumed 2.169s CPU time, 15.6M memory peak, 0B memory swap peak. 
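The "Consumed ... CPU time, ... memory peak" figures that systemd prints when these cri-containerd scopes stop come from the unit's cgroup accounting. On a cgroup v2 host the same counters can be read directly from the scope's cgroup directory; the sketch below assumes a v2 hierarchy under /sys/fs/cgroup and a kernel recent enough to expose memory.peak, which are assumptions about the environment rather than facts from this log.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
	"time"
)

// cpuTime reads usage_usec from cpu.stat in a cgroup v2 directory.
func cpuTime(cgroupDir string) (time.Duration, error) {
	data, err := os.ReadFile(filepath.Join(cgroupDir, "cpu.stat"))
	if err != nil {
		return 0, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "usage_usec" {
			usec, err := strconv.ParseInt(fields[1], 10, 64)
			if err != nil {
				return 0, err
			}
			return time.Duration(usec) * time.Microsecond, nil
		}
	}
	return 0, fmt.Errorf("usage_usec not found in %s/cpu.stat", cgroupDir)
}

// memoryPeak reads memory.peak (bytes); the file only exists on newer kernels.
func memoryPeak(cgroupDir string) (uint64, error) {
	data, err := os.ReadFile(filepath.Join(cgroupDir, "memory.peak"))
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64)
}

func main() {
	// Path is illustrative; pass the .scope directory you care about.
	dir := "/sys/fs/cgroup/system.slice"
	if len(os.Args) > 1 {
		dir = os.Args[1]
	}
	if cpu, err := cpuTime(dir); err == nil {
		fmt.Printf("CPU time consumed: %v\n", cpu)
	}
	if peak, err := memoryPeak(dir); err == nil {
		fmt.Printf("memory peak: %.1fM\n", float64(peak)/(1024*1024))
	}
}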
Jun 25 18:48:01.944247 containerd[1720]: time="2024-06-25T18:48:01.942284298Z" level=info msg="shim disconnected" id=ab39bc059e8b1f2004a4fb0100f9eb8de7f9b525984e8b574ba3671e3a732013 namespace=k8s.io Jun 25 18:48:01.944247 containerd[1720]: time="2024-06-25T18:48:01.942418698Z" level=warning msg="cleaning up after shim disconnected" id=ab39bc059e8b1f2004a4fb0100f9eb8de7f9b525984e8b574ba3671e3a732013 namespace=k8s.io Jun 25 18:48:01.944247 containerd[1720]: time="2024-06-25T18:48:01.942440198Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:48:01.943824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab39bc059e8b1f2004a4fb0100f9eb8de7f9b525984e8b574ba3671e3a732013-rootfs.mount: Deactivated successfully. Jun 25 18:48:02.424593 kubelet[3151]: E0625 18:48:02.424449 3151 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-4012.0.0-a-7f29c71dfa.17dc53c7031bf1b2", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-4012.0.0-a-7f29c71dfa", UID:"eca0508d5e75459d0944c293356ddcb3", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-4012.0.0-a-7f29c71dfa"}, FirstTimestamp:time.Date(2024, time.June, 25, 18, 47, 51, 964447154, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 18, 47, 51, 964447154, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-4012.0.0-a-7f29c71dfa"}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.39:46238->10.200.8.12:2379: read: connection timed out' (will not retry!) Jun 25 18:48:02.504009 kubelet[3151]: I0625 18:48:02.503976 3151 scope.go:117] "RemoveContainer" containerID="ab39bc059e8b1f2004a4fb0100f9eb8de7f9b525984e8b574ba3671e3a732013" Jun 25 18:48:02.506345 containerd[1720]: time="2024-06-25T18:48:02.506303076Z" level=info msg="CreateContainer within sandbox \"bfa62a732a3938a632b9e074ca80f39342ad09290950f5092dc652ff17568f9f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jun 25 18:48:02.538538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2732438234.mount: Deactivated successfully. 
Jun 25 18:48:02.544771 containerd[1720]: time="2024-06-25T18:48:02.544724243Z" level=info msg="CreateContainer within sandbox \"bfa62a732a3938a632b9e074ca80f39342ad09290950f5092dc652ff17568f9f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d601887de313810966ca28e67653e4316c27cebc018949afb386d74077cca1d2\""
Jun 25 18:48:02.545379 containerd[1720]: time="2024-06-25T18:48:02.545317844Z" level=info msg="StartContainer for \"d601887de313810966ca28e67653e4316c27cebc018949afb386d74077cca1d2\""
Jun 25 18:48:02.578690 systemd[1]: Started cri-containerd-d601887de313810966ca28e67653e4316c27cebc018949afb386d74077cca1d2.scope - libcontainer container d601887de313810966ca28e67653e4316c27cebc018949afb386d74077cca1d2.
Jun 25 18:48:02.622057 containerd[1720]: time="2024-06-25T18:48:02.622000277Z" level=info msg="StartContainer for \"d601887de313810966ca28e67653e4316c27cebc018949afb386d74077cca1d2\" returns successfully"
Jun 25 18:48:07.928629 kubelet[3151]: I0625 18:48:07.928581 3151 status_manager.go:853] "Failed to get status for pod" podUID="cca5e0e776d363741373c442e3d8197e" pod="kube-system/kube-controller-manager-ci-4012.0.0-a-7f29c71dfa" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.39:46332->10.200.8.12:2379: read: connection timed out"
Jun 25 18:48:10.107950 systemd[1]: cri-containerd-533f12fd41acaae38781495050e2d56def6862da4c3dad10cd0c637c16ac381f.scope: Deactivated successfully.
Jun 25 18:48:10.130617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-533f12fd41acaae38781495050e2d56def6862da4c3dad10cd0c637c16ac381f-rootfs.mount: Deactivated successfully.
Jun 25 18:48:10.152462 containerd[1720]: time="2024-06-25T18:48:10.152382751Z" level=info msg="shim disconnected" id=533f12fd41acaae38781495050e2d56def6862da4c3dad10cd0c637c16ac381f namespace=k8s.io
Jun 25 18:48:10.152462 containerd[1720]: time="2024-06-25T18:48:10.152461851Z" level=warning msg="cleaning up after shim disconnected" id=533f12fd41acaae38781495050e2d56def6862da4c3dad10cd0c637c16ac381f namespace=k8s.io
Jun 25 18:48:10.153005 containerd[1720]: time="2024-06-25T18:48:10.152477451Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 18:48:10.524584 kubelet[3151]: I0625 18:48:10.524454 3151 scope.go:117] "RemoveContainer" containerID="a2a0bf078c8b82a55b662cff226f40d1e36a4048f5690bcba14cc420d3a4b45e"
Jun 25 18:48:10.525077 kubelet[3151]: I0625 18:48:10.524788 3151 scope.go:117] "RemoveContainer" containerID="533f12fd41acaae38781495050e2d56def6862da4c3dad10cd0c637c16ac381f"
Jun 25 18:48:10.525152 kubelet[3151]: E0625 18:48:10.525137 3151 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-76c4974c85-6c594_tigera-operator(781ae907-1977-45fb-8800-6158e586210b)\"" pod="tigera-operator/tigera-operator-76c4974c85-6c594" podUID="781ae907-1977-45fb-8800-6158e586210b"
Jun 25 18:48:10.526469 containerd[1720]: time="2024-06-25T18:48:10.526426559Z" level=info msg="RemoveContainer for \"a2a0bf078c8b82a55b662cff226f40d1e36a4048f5690bcba14cc420d3a4b45e\""
Jun 25 18:48:10.535001 containerd[1720]: time="2024-06-25T18:48:10.534960175Z" level=info msg="RemoveContainer for \"a2a0bf078c8b82a55b662cff226f40d1e36a4048f5690bcba14cc420d3a4b45e\" returns successfully"
Jun 25 18:48:11.913217 kubelet[3151]: E0625 18:48:11.913169 3151 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-7f29c71dfa?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"