Jun 25 18:42:39.078010 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024 Jun 25 18:42:39.078048 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:42:39.078063 kernel: BIOS-provided physical RAM map: Jun 25 18:42:39.078075 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 25 18:42:39.078085 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jun 25 18:42:39.078096 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jun 25 18:42:39.078109 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Jun 25 18:42:39.078125 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Jun 25 18:42:39.078136 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jun 25 18:42:39.078147 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jun 25 18:42:39.078157 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jun 25 18:42:39.078169 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jun 25 18:42:39.078181 kernel: printk: bootconsole [earlyser0] enabled Jun 25 18:42:39.078194 kernel: NX (Execute Disable) protection: active Jun 25 18:42:39.078212 kernel: APIC: Static calls initialized Jun 25 18:42:39.078226 kernel: efi: EFI v2.7 by Microsoft Jun 25 18:42:39.078241 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 Jun 25 18:42:39.078254 kernel: SMBIOS 3.1.0 present. 
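As a quick cross-check of the BIOS-e820 map above, the usable ranges can be summed directly. The following standalone Python sketch is illustrative only (the ranges are copied from the log; it is not part of the boot flow):

```python
# Sum the "usable" BIOS-e820 ranges printed above (end addresses are inclusive).
usable = [
    (0x0000000000000000, 0x000000000009ffff),
    (0x0000000000100000, 0x000000003ff40fff),
    (0x000000003ffff000, 0x000000003fffffff),
    (0x0000000100000000, 0x00000002bfffffff),
]
total = sum(end - start + 1 for start, end in usable)
print(f"{total} bytes usable (~{total / 2**30:.2f} GiB)")
# The kernel later reserves the first 4 KiB page
# ("e820: update [mem 0x00000000-0x00000fff] usable ==> reserved"),
# which lines up with the 8387460K total in the "Memory: ..." line further down:
print((total - 0x1000) // 1024, "KiB")
```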
Jun 25 18:42:39.078268 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jun 25 18:42:39.078283 kernel: Hypervisor detected: Microsoft Hyper-V Jun 25 18:42:39.078298 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jun 25 18:42:39.078311 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Jun 25 18:42:39.078323 kernel: Hyper-V: Nested features: 0x1e0101 Jun 25 18:42:39.078336 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jun 25 18:42:39.078353 kernel: Hyper-V: Using hypercall for remote TLB flush Jun 25 18:42:39.078366 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 25 18:42:39.078381 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 25 18:42:39.078398 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jun 25 18:42:39.078414 kernel: tsc: Detected 2593.907 MHz processor Jun 25 18:42:39.078430 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 18:42:39.078443 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 18:42:39.078457 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jun 25 18:42:39.078469 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jun 25 18:42:39.078484 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 18:42:39.078497 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jun 25 18:42:39.078509 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jun 25 18:42:39.078522 kernel: Using GB pages for direct mapping Jun 25 18:42:39.078533 kernel: Secure boot disabled Jun 25 18:42:39.078546 kernel: ACPI: Early table checksum verification disabled Jun 25 18:42:39.078558 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jun 25 18:42:39.078577 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:39.078593 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:39.078607 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jun 25 18:42:39.078621 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jun 25 18:42:39.078635 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:39.078648 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:39.078663 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:39.078679 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:39.078693 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:39.078706 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:39.078720 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:39.078734 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jun 25 18:42:39.078748 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jun 25 18:42:39.078762 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jun 25 18:42:39.078776 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jun 25 18:42:39.078792 kernel: 
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jun 25 18:42:39.078806 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jun 25 18:42:39.078820 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jun 25 18:42:39.078834 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jun 25 18:42:39.078848 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jun 25 18:42:39.078962 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jun 25 18:42:39.078971 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 25 18:42:39.078979 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 25 18:42:39.078990 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jun 25 18:42:39.079001 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jun 25 18:42:39.079009 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jun 25 18:42:39.079016 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jun 25 18:42:39.079026 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jun 25 18:42:39.079035 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jun 25 18:42:39.079042 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jun 25 18:42:39.079053 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jun 25 18:42:39.079060 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jun 25 18:42:39.079068 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jun 25 18:42:39.079078 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jun 25 18:42:39.079088 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jun 25 18:42:39.079096 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jun 25 18:42:39.079103 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jun 25 18:42:39.079111 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jun 25 18:42:39.079118 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jun 25 18:42:39.079129 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jun 25 18:42:39.079137 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jun 25 18:42:39.079144 kernel: Zone ranges: Jun 25 18:42:39.079156 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 18:42:39.079164 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jun 25 18:42:39.079171 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jun 25 18:42:39.079179 kernel: Movable zone start for each node Jun 25 18:42:39.079187 kernel: Early memory node ranges Jun 25 18:42:39.079197 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jun 25 18:42:39.079204 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jun 25 18:42:39.079212 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jun 25 18:42:39.079221 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jun 25 18:42:39.079231 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jun 25 18:42:39.079239 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 18:42:39.079246 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jun 25 18:42:39.079255 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Jun 25 18:42:39.079264 kernel: ACPI: PM-Timer IO Port: 0x408 Jun 25 18:42:39.079272 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jun 25 18:42:39.079279 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jun 25 18:42:39.079289 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 18:42:39.079297 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 18:42:39.079307 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jun 25 18:42:39.079315 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 25 18:42:39.079325 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jun 25 18:42:39.079333 kernel: Booting paravirtualized kernel on Hyper-V Jun 25 18:42:39.079340 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 18:42:39.079348 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 25 18:42:39.079358 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Jun 25 18:42:39.079366 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Jun 25 18:42:39.079373 kernel: pcpu-alloc: [0] 0 1 Jun 25 18:42:39.079383 kernel: Hyper-V: PV spinlocks enabled Jun 25 18:42:39.079393 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 18:42:39.079402 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:42:39.079410 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 18:42:39.079420 kernel: random: crng init done Jun 25 18:42:39.079428 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jun 25 18:42:39.079435 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 18:42:39.079443 kernel: Fallback order for Node 0: 0 Jun 25 18:42:39.079455 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jun 25 18:42:39.079469 kernel: Policy zone: Normal Jun 25 18:42:39.079480 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 18:42:39.079490 kernel: software IO TLB: area num 2. Jun 25 18:42:39.079499 kernel: Memory: 8070924K/8387460K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 316276K reserved, 0K cma-reserved) Jun 25 18:42:39.079507 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 18:42:39.079515 kernel: ftrace: allocating 37650 entries in 148 pages Jun 25 18:42:39.079526 kernel: ftrace: allocated 148 pages with 3 groups Jun 25 18:42:39.079534 kernel: Dynamic Preempt: voluntary Jun 25 18:42:39.079542 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 18:42:39.079553 kernel: rcu: RCU event tracing is enabled. Jun 25 18:42:39.079564 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 18:42:39.079572 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 18:42:39.079580 kernel: Rude variant of Tasks RCU enabled. Jun 25 18:42:39.079588 kernel: Tracing variant of Tasks RCU enabled. 
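The Dentry and Inode cache lines above follow a simple sizing rule: on x86-64 each hash bucket is a single 8-byte pointer, and the reported "order" is the base-2 log of the number of 4 KiB pages allocated. A small illustrative check using the figures from the log:

```python
import math

for name, entries in [("Dentry cache", 1048576), ("Inode-cache", 524288)]:
    size = entries * 8                      # one 8-byte bucket pointer per entry
    order = int(math.log2(size / 4096))     # number of 4 KiB pages, as an order
    print(f"{name}: {entries} entries -> order {order}, {size} bytes")
# Dentry cache: 1048576 entries -> order 11, 8388608 bytes
# Inode-cache: 524288 entries -> order 10, 4194304 bytes
```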
Jun 25 18:42:39.079599 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 18:42:39.079609 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 18:42:39.079620 kernel: Using NULL legacy PIC Jun 25 18:42:39.079628 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jun 25 18:42:39.079636 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 18:42:39.079644 kernel: Console: colour dummy device 80x25 Jun 25 18:42:39.079652 kernel: printk: console [tty1] enabled Jun 25 18:42:39.079660 kernel: printk: console [ttyS0] enabled Jun 25 18:42:39.079671 kernel: printk: bootconsole [earlyser0] disabled Jun 25 18:42:39.079679 kernel: ACPI: Core revision 20230628 Jun 25 18:42:39.079687 kernel: Failed to register legacy timer interrupt Jun 25 18:42:39.079697 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 18:42:39.079708 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 25 18:42:39.079716 kernel: Hyper-V: Using IPI hypercalls Jun 25 18:42:39.079724 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jun 25 18:42:39.079732 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jun 25 18:42:39.079743 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jun 25 18:42:39.079751 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jun 25 18:42:39.079760 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jun 25 18:42:39.079770 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jun 25 18:42:39.079780 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Jun 25 18:42:39.079790 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 25 18:42:39.079799 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jun 25 18:42:39.079807 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 18:42:39.079818 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 18:42:39.079826 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 18:42:39.079834 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 18:42:39.079845 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
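The calibration line above reports lpj=2593907 and 5187.81 BogoMIPS, and the two-CPU total of 10375.62 BogoMIPS appears once the second CPU is online. A sketch reproducing the kernel's integer formatting, under the assumption that this build uses CONFIG_HZ=1000 (an assumption, but consistent with the printed values):

```python
HZ = 1000        # assumed tick rate; not printed in the log
lpj = 2593907    # loops_per_jiffy from the calibration line

def bogomips(loops):
    # Mirrors the kernel's "%lu.%02lu" BogoMIPS formatting (truncating, not rounding).
    return f"{loops // (500000 // HZ)}.{(loops // (5000 // HZ)) % 100:02d}"

print(bogomips(lpj))        # 5187.81
print(bogomips(2 * lpj))    # 10375.62, the 2-processor total logged later
```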
Jun 25 18:42:39.079864 kernel: RETBleed: Vulnerable Jun 25 18:42:39.079875 kernel: Speculative Store Bypass: Vulnerable Jun 25 18:42:39.079883 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 18:42:39.079894 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 18:42:39.079901 kernel: GDS: Unknown: Dependent on hypervisor status Jun 25 18:42:39.079913 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 18:42:39.079920 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 18:42:39.079929 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 18:42:39.079939 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jun 25 18:42:39.079947 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jun 25 18:42:39.079957 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jun 25 18:42:39.079966 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 18:42:39.079976 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jun 25 18:42:39.079987 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jun 25 18:42:39.079995 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jun 25 18:42:39.080004 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jun 25 18:42:39.080014 kernel: Freeing SMP alternatives memory: 32K Jun 25 18:42:39.080021 kernel: pid_max: default: 32768 minimum: 301 Jun 25 18:42:39.080032 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 18:42:39.080040 kernel: SELinux: Initializing. Jun 25 18:42:39.080049 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 18:42:39.080059 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 18:42:39.080070 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jun 25 18:42:39.080078 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:42:39.080089 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:42:39.080100 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:42:39.080108 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jun 25 18:42:39.080119 kernel: signal: max sigframe size: 3632 Jun 25 18:42:39.080127 kernel: rcu: Hierarchical SRCU implementation. Jun 25 18:42:39.080137 kernel: rcu: Max phase no-delay instances is 400. Jun 25 18:42:39.080146 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 25 18:42:39.080154 kernel: smp: Bringing up secondary CPUs ... Jun 25 18:42:39.080165 kernel: smpboot: x86: Booting SMP configuration: Jun 25 18:42:39.080175 kernel: .... node #0, CPUs: #1 Jun 25 18:42:39.080185 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jun 25 18:42:39.080195 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
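The x86/fpu lines above describe the "compacted" XSAVE layout: the enabled extended features are packed one after another behind the 512-byte legacy FXSAVE area plus the 64-byte XSAVE header, which is where the printed offsets and the 2432-byte context size come from. A short illustrative reconstruction:

```python
# Feature sizes taken from the xstate_sizes[] lines above
# (2: AVX, 5: AVX-512 opmask, 6: AVX-512 Hi256, 7: AVX-512 ZMM_Hi256).
sizes = {2: 256, 5: 64, 6: 512, 7: 1024}
offset = 512 + 64                      # legacy FXSAVE area + XSAVE header
for feature, size in sizes.items():
    print(f"xstate_offset[{feature}]: {offset}, xstate_sizes[{feature}]: {size}")
    offset += size
print("context size:", offset, "bytes")   # 2432, as reported above
```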
Jun 25 18:42:39.080203 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 18:42:39.080214 kernel: smpboot: Max logical packages: 1 Jun 25 18:42:39.080222 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jun 25 18:42:39.080230 kernel: devtmpfs: initialized Jun 25 18:42:39.080241 kernel: x86/mm: Memory block size: 128MB Jun 25 18:42:39.080251 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jun 25 18:42:39.080262 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 18:42:39.080271 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 18:42:39.080279 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 18:42:39.080290 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 18:42:39.080298 kernel: audit: initializing netlink subsys (disabled) Jun 25 18:42:39.080306 kernel: audit: type=2000 audit(1719340957.028:1): state=initialized audit_enabled=0 res=1 Jun 25 18:42:39.080316 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 18:42:39.080324 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 18:42:39.080337 kernel: cpuidle: using governor menu Jun 25 18:42:39.080345 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 18:42:39.080353 kernel: dca service started, version 1.12.1 Jun 25 18:42:39.080364 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jun 25 18:42:39.080372 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 25 18:42:39.080381 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 18:42:39.080391 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 18:42:39.080399 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 18:42:39.080410 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 18:42:39.080421 kernel: ACPI: Added _OSI(Module Device) Jun 25 18:42:39.080429 kernel: ACPI: Added _OSI(Processor Device) Jun 25 18:42:39.080440 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 18:42:39.080448 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 18:42:39.080456 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 18:42:39.080464 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 25 18:42:39.080474 kernel: ACPI: Interpreter enabled Jun 25 18:42:39.080484 kernel: ACPI: PM: (supports S0 S5) Jun 25 18:42:39.080493 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 18:42:39.080504 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 18:42:39.080515 kernel: PCI: Ignoring E820 reservations for host bridge windows Jun 25 18:42:39.080523 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jun 25 18:42:39.080533 kernel: iommu: Default domain type: Translated Jun 25 18:42:39.080542 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 18:42:39.080550 kernel: efivars: Registered efivars operations Jun 25 18:42:39.080561 kernel: PCI: Using ACPI for IRQ routing Jun 25 18:42:39.080569 kernel: PCI: System does not support PCI Jun 25 18:42:39.080577 kernel: vgaarb: loaded Jun 25 18:42:39.080589 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jun 25 18:42:39.080597 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 18:42:39.080607 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 18:42:39.080616 kernel: 
pnp: PnP ACPI init Jun 25 18:42:39.080624 kernel: pnp: PnP ACPI: found 3 devices Jun 25 18:42:39.080635 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 18:42:39.080643 kernel: NET: Registered PF_INET protocol family Jun 25 18:42:39.080652 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 18:42:39.080663 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jun 25 18:42:39.080673 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 18:42:39.080684 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 18:42:39.080692 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jun 25 18:42:39.080700 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jun 25 18:42:39.080711 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 25 18:42:39.080719 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 25 18:42:39.080728 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 18:42:39.080739 kernel: NET: Registered PF_XDP protocol family Jun 25 18:42:39.080747 kernel: PCI: CLS 0 bytes, default 64 Jun 25 18:42:39.080760 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jun 25 18:42:39.080768 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Jun 25 18:42:39.080776 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 25 18:42:39.080787 kernel: Initialise system trusted keyrings Jun 25 18:42:39.080795 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jun 25 18:42:39.080804 kernel: Key type asymmetric registered Jun 25 18:42:39.080813 kernel: Asymmetric key parser 'x509' registered Jun 25 18:42:39.080821 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jun 25 18:42:39.080832 kernel: io scheduler mq-deadline registered Jun 25 18:42:39.080842 kernel: io scheduler kyber registered Jun 25 18:42:39.080857 kernel: io scheduler bfq registered Jun 25 18:42:39.080867 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 18:42:39.080875 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 18:42:39.080885 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 18:42:39.080894 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jun 25 18:42:39.080905 kernel: i8042: PNP: No PS/2 controller found. 
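The software IO TLB line above maps a bounce-buffer region at [0x3b5c1000, 0x3f5c1000); a one-line check that this is the 64 MB the kernel reports:

```python
start, end = 0x3b5c1000, 0x3f5c1000    # SWIOTLB range from the log
print((end - start) // 2**20, "MiB")   # 64, matching "(64MB)"
```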
Jun 25 18:42:39.081039 kernel: rtc_cmos 00:02: registered as rtc0 Jun 25 18:42:39.081130 kernel: rtc_cmos 00:02: setting system clock to 2024-06-25T18:42:38 UTC (1719340958) Jun 25 18:42:39.081215 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jun 25 18:42:39.081226 kernel: intel_pstate: CPU model not supported Jun 25 18:42:39.081237 kernel: efifb: probing for efifb Jun 25 18:42:39.081246 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 25 18:42:39.081254 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 25 18:42:39.081265 kernel: efifb: scrolling: redraw Jun 25 18:42:39.081273 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 25 18:42:39.081285 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 18:42:39.081294 kernel: fb0: EFI VGA frame buffer device Jun 25 18:42:39.081302 kernel: pstore: Using crash dump compression: deflate Jun 25 18:42:39.081313 kernel: pstore: Registered efi_pstore as persistent store backend Jun 25 18:42:39.081321 kernel: NET: Registered PF_INET6 protocol family Jun 25 18:42:39.081329 kernel: Segment Routing with IPv6 Jun 25 18:42:39.081340 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 18:42:39.081349 kernel: NET: Registered PF_PACKET protocol family Jun 25 18:42:39.081360 kernel: Key type dns_resolver registered Jun 25 18:42:39.081368 kernel: IPI shorthand broadcast: enabled Jun 25 18:42:39.081381 kernel: sched_clock: Marking stable (852006200, 47932100)->(1121090700, -221152400) Jun 25 18:42:39.081389 kernel: registered taskstats version 1 Jun 25 18:42:39.081397 kernel: Loading compiled-in X.509 certificates Jun 25 18:42:39.081408 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90' Jun 25 18:42:39.081416 kernel: Key type .fscrypt registered Jun 25 18:42:39.081424 kernel: Key type fscrypt-provisioning registered Jun 25 18:42:39.081435 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 18:42:39.081443 kernel: ima: Allocated hash algorithm: sha1 Jun 25 18:42:39.081456 kernel: ima: No architecture policies found Jun 25 18:42:39.081464 kernel: clk: Disabling unused clocks Jun 25 18:42:39.081472 kernel: Freeing unused kernel image (initmem) memory: 49384K Jun 25 18:42:39.081483 kernel: Write protecting the kernel read-only data: 36864k Jun 25 18:42:39.081492 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K Jun 25 18:42:39.081501 kernel: Run /init as init process Jun 25 18:42:39.081510 kernel: with arguments: Jun 25 18:42:39.081518 kernel: /init Jun 25 18:42:39.081528 kernel: with environment: Jun 25 18:42:39.081539 kernel: HOME=/ Jun 25 18:42:39.081547 kernel: TERM=linux Jun 25 18:42:39.081558 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 18:42:39.081568 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:42:39.081581 systemd[1]: Detected virtualization microsoft. Jun 25 18:42:39.081590 systemd[1]: Detected architecture x86-64. Jun 25 18:42:39.081599 systemd[1]: Running in initrd. Jun 25 18:42:39.081609 systemd[1]: No hostname configured, using default hostname. Jun 25 18:42:39.081620 systemd[1]: Hostname set to . Jun 25 18:42:39.081632 systemd[1]: Initializing machine ID from random generator. 
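The rtc_cmos line above sets the system clock to 2024-06-25T18:42:38 UTC and prints the matching epoch value in parentheses; the two are consistent:

```python
from datetime import datetime, timezone

print(datetime.fromtimestamp(1719340958, tz=timezone.utc).isoformat())
# 2024-06-25T18:42:38+00:00, as in the rtc_cmos message
```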
Jun 25 18:42:39.081640 systemd[1]: Queued start job for default target initrd.target. Jun 25 18:42:39.081649 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:42:39.081660 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:42:39.081669 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 18:42:39.081680 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:42:39.081689 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 25 18:42:39.081701 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 18:42:39.081712 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 18:42:39.081721 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 18:42:39.081732 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:42:39.081742 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:42:39.081752 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:42:39.081764 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:42:39.081783 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:42:39.081800 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:42:39.081823 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:42:39.081846 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:42:39.081882 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 18:42:39.081904 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 18:42:39.081922 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:42:39.081939 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:42:39.081959 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:42:39.081976 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:42:39.081996 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:42:39.082017 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:42:39.082039 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:42:39.082058 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:42:39.082078 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:42:39.082096 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:42:39.082136 systemd-journald[176]: Collecting audit messages is disabled. Jun 25 18:42:39.082180 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:39.082196 systemd-journald[176]: Journal started Jun 25 18:42:39.082235 systemd-journald[176]: Runtime Journal (/run/log/journal/13f8c2f3939f42f98d18dea483303dde) is 8.0M, max 158.8M, 150.8M free. Jun 25 18:42:39.091875 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:42:39.099473 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Jun 25 18:42:39.102704 systemd-modules-load[177]: Inserted module 'overlay' Jun 25 18:42:39.110044 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:42:39.115653 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 18:42:39.140004 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:42:39.152906 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 18:42:39.155562 systemd-modules-load[177]: Inserted module 'br_netfilter' Jun 25 18:42:39.157929 kernel: Bridge firewalling registered Jun 25 18:42:39.166018 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:42:39.170068 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:42:39.179722 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:39.180091 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:42:39.180405 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:42:39.184981 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:42:39.189985 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:42:39.195065 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:42:39.219350 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:42:39.223749 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:42:39.230043 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:42:39.243042 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 18:42:39.248978 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:42:39.268870 dracut-cmdline[211]: dracut-dracut-053 Jun 25 18:42:39.273483 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:42:39.304554 systemd-resolved[213]: Positive Trust Anchors: Jun 25 18:42:39.304574 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:42:39.304628 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:42:39.309952 systemd-resolved[213]: Defaulting to hostname 'linux'. 
Jun 25 18:42:39.310809 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:42:39.330831 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:42:39.372876 kernel: SCSI subsystem initialized Jun 25 18:42:39.384870 kernel: Loading iSCSI transport class v2.0-870. Jun 25 18:42:39.397876 kernel: iscsi: registered transport (tcp) Jun 25 18:42:39.423790 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:42:39.423842 kernel: QLogic iSCSI HBA Driver Jun 25 18:42:39.458938 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:42:39.468001 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:42:39.497378 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:42:39.497440 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:42:39.500920 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:42:39.545878 kernel: raid6: avx512x4 gen() 18391 MB/s Jun 25 18:42:39.564865 kernel: raid6: avx512x2 gen() 18175 MB/s Jun 25 18:42:39.583862 kernel: raid6: avx512x1 gen() 18184 MB/s Jun 25 18:42:39.602866 kernel: raid6: avx2x4 gen() 18235 MB/s Jun 25 18:42:39.621862 kernel: raid6: avx2x2 gen() 18171 MB/s Jun 25 18:42:39.642618 kernel: raid6: avx2x1 gen() 13749 MB/s Jun 25 18:42:39.642647 kernel: raid6: using algorithm avx512x4 gen() 18391 MB/s Jun 25 18:42:39.664912 kernel: raid6: .... xor() 8063 MB/s, rmw enabled Jun 25 18:42:39.664944 kernel: raid6: using avx512x2 recovery algorithm Jun 25 18:42:39.691886 kernel: xor: automatically using best checksumming function avx Jun 25 18:42:39.861878 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:42:39.871052 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:42:39.884064 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:42:39.900624 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jun 25 18:42:39.905082 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:42:39.921245 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:42:39.932352 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Jun 25 18:42:39.958579 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:42:39.968022 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:42:40.009444 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:42:40.023999 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 18:42:40.055155 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:42:40.062518 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:42:40.070167 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:42:40.076781 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:42:40.089077 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:42:40.102783 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 18:42:40.104625 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:42:40.104812 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
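The raid6 self-benchmark above times each gen() implementation and keeps the fastest; the selection can be reproduced from the printed throughputs:

```python
# Throughputs copied from the raid6 benchmark lines above (MB/s).
gen_mbps = {
    "avx512x4": 18391, "avx512x2": 18175, "avx512x1": 18184,
    "avx2x4": 18235, "avx2x2": 18171, "avx2x1": 13749,
}
best = max(gen_mbps, key=gen_mbps.get)
print(f"using algorithm {best} gen() {gen_mbps[best]} MB/s")  # avx512x4, as logged
```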
Jun 25 18:42:40.108412 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:42:40.125879 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:42:40.129001 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:40.132221 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:40.152374 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 18:42:40.151932 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:40.162832 kernel: AES CTR mode by8 optimization enabled Jun 25 18:42:40.165654 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:42:40.169192 kernel: hv_vmbus: Vmbus version:5.2 Jun 25 18:42:40.175121 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:42:40.177923 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:40.189117 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 25 18:42:40.198881 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 25 18:42:40.200041 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:40.239324 kernel: hv_vmbus: registering driver hv_storvsc Jun 25 18:42:40.239353 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 18:42:40.239371 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 18:42:40.239395 kernel: scsi host1: storvsc_host_t Jun 25 18:42:40.239584 kernel: scsi host0: storvsc_host_t Jun 25 18:42:40.239735 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jun 25 18:42:40.239919 kernel: PTP clock support registered Jun 25 18:42:40.239938 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jun 25 18:42:40.245285 kernel: hv_utils: Registering HyperV Utility Driver Jun 25 18:42:40.245332 kernel: hv_vmbus: registering driver hv_utils Jun 25 18:42:40.247697 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 18:42:40.251236 kernel: hv_utils: Heartbeat IC version 3.0 Jun 25 18:42:40.253303 kernel: hv_utils: Shutdown IC version 3.2 Jun 25 18:42:40.257878 kernel: hv_utils: TimeSync IC version 4.0 Jun 25 18:42:41.155747 systemd-resolved[213]: Clock change detected. Flushing caches. Jun 25 18:42:41.164375 kernel: hv_vmbus: registering driver hid_hyperv Jun 25 18:42:41.178815 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 25 18:42:41.178858 kernel: hv_vmbus: registering driver hv_netvsc Jun 25 18:42:41.178879 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 25 18:42:41.180095 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:41.192498 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jun 25 18:42:41.213347 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 25 18:42:41.215925 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 18:42:41.215948 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 25 18:42:41.235924 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jun 25 18:42:41.251442 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jun 25 18:42:41.251634 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 25 18:42:41.251807 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jun 25 18:42:41.251974 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jun 25 18:42:41.252143 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:41.252164 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 25 18:42:41.240432 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:42:41.367491 kernel: hv_netvsc 002248a0-8c7d-0022-48a0-8c7d002248a0 eth0: VF slot 1 added Jun 25 18:42:41.376304 kernel: hv_vmbus: registering driver hv_pci Jun 25 18:42:41.381912 kernel: hv_pci 4f2700fb-3c17-498f-a28a-252480473288: PCI VMBus probing: Using version 0x10004 Jun 25 18:42:41.425464 kernel: hv_pci 4f2700fb-3c17-498f-a28a-252480473288: PCI host bridge to bus 3c17:00 Jun 25 18:42:41.425630 kernel: pci_bus 3c17:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jun 25 18:42:41.425821 kernel: pci_bus 3c17:00: No busn resource found for root bus, will use [bus 00-ff] Jun 25 18:42:41.425979 kernel: pci 3c17:00:02.0: [15b3:1016] type 00 class 0x020000 Jun 25 18:42:41.426494 kernel: pci 3c17:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jun 25 18:42:41.426664 kernel: pci 3c17:00:02.0: enabling Extended Tags Jun 25 18:42:41.426843 kernel: pci 3c17:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 3c17:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jun 25 18:42:41.427021 kernel: pci_bus 3c17:00: busn_res: [bus 00-ff] end is updated to 00 Jun 25 18:42:41.427173 kernel: pci 3c17:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jun 25 18:42:41.616630 kernel: mlx5_core 3c17:00:02.0: enabling device (0000 -> 0002) Jun 25 18:42:41.859974 kernel: mlx5_core 3c17:00:02.0: firmware version: 14.30.1284 Jun 25 18:42:41.860180 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (451) Jun 25 18:42:41.860202 kernel: hv_netvsc 002248a0-8c7d-0022-48a0-8c7d002248a0 eth0: VF registering: eth1 Jun 25 18:42:41.861151 kernel: mlx5_core 3c17:00:02.0 eth1: joined to eth0 Jun 25 18:42:41.861378 kernel: mlx5_core 3c17:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jun 25 18:42:41.756291 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jun 25 18:42:41.832555 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 18:42:41.875288 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (450) Jun 25 18:42:41.880287 kernel: mlx5_core 3c17:00:02.0 enP15383s1: renamed from eth1 Jun 25 18:42:41.897835 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jun 25 18:42:41.909152 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jun 25 18:42:41.912691 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. 
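The sd 0:0:0:0 lines above report the virtual disk as 63737856 logical blocks of 512 bytes; converting gives the decimal and binary sizes shown in parentheses:

```python
blocks, block_size = 63737856, 512                        # from the [sda] line
size = blocks * block_size
print(f"{size / 1e9:.1f} GB / {size / 2**30:.1f} GiB")    # 32.6 GB / 30.4 GiB
```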
Jun 25 18:42:41.931402 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:42:41.944327 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:41.952302 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:42.958658 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:42.959113 disk-uuid[604]: The operation has completed successfully. Jun 25 18:42:43.037817 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:42:43.037943 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:42:43.061418 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:42:43.067448 sh[717]: Success Jun 25 18:42:43.100445 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 25 18:42:43.319413 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:42:43.336390 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:42:43.336729 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 18:42:43.355283 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0 Jun 25 18:42:43.355323 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:42:43.360872 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:42:43.363765 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:42:43.366275 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:42:43.758676 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:42:43.764953 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:42:43.774425 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:42:43.781397 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 18:42:43.791591 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:43.797587 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:42:43.797635 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:42:43.821301 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:42:43.830412 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:42:43.837292 kernel: BTRFS info (device sda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:43.843702 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:42:43.855406 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 18:42:43.884079 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:42:43.894522 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:42:43.916212 systemd-networkd[901]: lo: Link UP Jun 25 18:42:43.916222 systemd-networkd[901]: lo: Gained carrier Jun 25 18:42:43.918251 systemd-networkd[901]: Enumeration completed Jun 25 18:42:43.918544 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:42:43.921207 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jun 25 18:42:43.921213 systemd-networkd[901]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:42:43.927684 systemd[1]: Reached target network.target - Network. Jun 25 18:42:43.985292 kernel: mlx5_core 3c17:00:02.0 enP15383s1: Link up Jun 25 18:42:44.016306 kernel: hv_netvsc 002248a0-8c7d-0022-48a0-8c7d002248a0 eth0: Data path switched to VF: enP15383s1 Jun 25 18:42:44.016633 systemd-networkd[901]: enP15383s1: Link UP Jun 25 18:42:44.016805 systemd-networkd[901]: eth0: Link UP Jun 25 18:42:44.017200 systemd-networkd[901]: eth0: Gained carrier Jun 25 18:42:44.017215 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:42:44.029131 systemd-networkd[901]: enP15383s1: Gained carrier Jun 25 18:42:44.068327 systemd-networkd[901]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 25 18:42:44.661456 ignition[853]: Ignition 2.19.0 Jun 25 18:42:44.661469 ignition[853]: Stage: fetch-offline Jun 25 18:42:44.663143 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:42:44.661521 ignition[853]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:44.661532 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:44.661656 ignition[853]: parsed url from cmdline: "" Jun 25 18:42:44.661662 ignition[853]: no config URL provided Jun 25 18:42:44.661669 ignition[853]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:42:44.681383 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 25 18:42:44.661681 ignition[853]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:42:44.661688 ignition[853]: failed to fetch config: resource requires networking Jun 25 18:42:44.662222 ignition[853]: Ignition finished successfully Jun 25 18:42:44.699554 ignition[910]: Ignition 2.19.0 Jun 25 18:42:44.699567 ignition[910]: Stage: fetch Jun 25 18:42:44.699797 ignition[910]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:44.699811 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:44.699913 ignition[910]: parsed url from cmdline: "" Jun 25 18:42:44.699916 ignition[910]: no config URL provided Jun 25 18:42:44.699921 ignition[910]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:42:44.699930 ignition[910]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:42:44.699953 ignition[910]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 25 18:42:44.791834 ignition[910]: GET result: OK Jun 25 18:42:44.792016 ignition[910]: config has been read from IMDS userdata Jun 25 18:42:44.792068 ignition[910]: parsing config with SHA512: efc785744051ed65d2c1d45e6107f5231a00cad6184542730f95a8f5da3c4935b8e94c8963fed9cad67221aa8b3be71b657ff2be7225ad437ebb27b7e888496b Jun 25 18:42:44.796589 unknown[910]: fetched base config from "system" Jun 25 18:42:44.797103 ignition[910]: fetch: fetch complete Jun 25 18:42:44.796596 unknown[910]: fetched base config from "system" Jun 25 18:42:44.797110 ignition[910]: fetch: fetch passed Jun 25 18:42:44.796603 unknown[910]: fetched user config from "azure" Jun 25 18:42:44.797163 ignition[910]: Ignition finished successfully Jun 25 18:42:44.798783 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 18:42:44.818448 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
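The Ignition fetch stage above retrieves its config from the Azure IMDS userData endpoint and logs a SHA512 of what it parsed. A rough, standalone sketch of that request follows; it only works from inside an Azure VM, the Metadata header is a requirement of IMDS rather than something shown in the log, and since the log does not show whether the hash is taken before or after base64-decoding the userData, the digest printed here is purely illustrative:

```python
import hashlib
import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")          # endpoint from the log

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    payload = resp.read()                               # base64-encoded userData

print("SHA512:", hashlib.sha512(payload).hexdigest())
```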
Jun 25 18:42:44.834349 ignition[917]: Ignition 2.19.0 Jun 25 18:42:44.834360 ignition[917]: Stage: kargs Jun 25 18:42:44.834581 ignition[917]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:44.834593 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:44.835467 ignition[917]: kargs: kargs passed Jun 25 18:42:44.835513 ignition[917]: Ignition finished successfully Jun 25 18:42:44.847660 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:42:44.858424 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 18:42:44.873809 ignition[924]: Ignition 2.19.0 Jun 25 18:42:44.873819 ignition[924]: Stage: disks Jun 25 18:42:44.875751 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:42:44.874028 ignition[924]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:44.879018 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:42:44.874041 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:44.883345 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:42:44.874891 ignition[924]: disks: disks passed Jun 25 18:42:44.874932 ignition[924]: Ignition finished successfully Jun 25 18:42:44.903521 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:42:44.906445 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:42:44.914699 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:42:44.928404 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 18:42:44.986193 systemd-fsck[933]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jun 25 18:42:44.991495 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:42:45.001433 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:42:45.106350 kernel: EXT4-fs (sda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none. Jun 25 18:42:45.106922 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:42:45.110139 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:42:45.162386 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:42:45.168628 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:42:45.180284 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (944) Jun 25 18:42:45.184915 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 25 18:42:45.194680 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:45.194719 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:42:45.194743 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:42:45.194765 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:42:45.197959 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:42:45.197998 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:42:45.211283 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:42:45.213882 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
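The systemd-fsck summary above ("ROOT: clean, 14/7326000 files, 477710/7359488 blocks") implies the root filesystem is nearly empty at first boot; the percentages, for illustration:

```python
files_used, files_total = 14, 7326000
blocks_used, blocks_total = 477710, 7359488
print(f"inodes in use: {100 * files_used / files_total:.4f}%")    # ~0.0002%
print(f"blocks in use: {100 * blocks_used / blocks_total:.1f}%")  # ~6.5%
```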
Jun 25 18:42:45.231559 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:42:45.592435 systemd-networkd[901]: enP15383s1: Gained IPv6LL Jun 25 18:42:45.656401 systemd-networkd[901]: eth0: Gained IPv6LL Jun 25 18:42:45.761986 coreos-metadata[946]: Jun 25 18:42:45.761 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 18:42:45.769521 coreos-metadata[946]: Jun 25 18:42:45.769 INFO Fetch successful Jun 25 18:42:45.772619 coreos-metadata[946]: Jun 25 18:42:45.769 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 25 18:42:45.784332 coreos-metadata[946]: Jun 25 18:42:45.784 INFO Fetch successful Jun 25 18:42:45.800699 coreos-metadata[946]: Jun 25 18:42:45.800 INFO wrote hostname ci-4012.0.0-a-d50f1c7422 to /sysroot/etc/hostname Jun 25 18:42:45.806786 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:42:45.934480 initrd-setup-root[973]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:42:45.955794 initrd-setup-root[980]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:42:45.975656 initrd-setup-root[987]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:42:45.980696 initrd-setup-root[994]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:42:46.627027 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:42:46.640417 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:42:46.645404 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:42:46.658088 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:42:46.664534 kernel: BTRFS info (device sda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:46.686458 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 18:42:46.691975 ignition[1062]: INFO : Ignition 2.19.0 Jun 25 18:42:46.691975 ignition[1062]: INFO : Stage: mount Jun 25 18:42:46.691975 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:46.691975 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:46.695151 ignition[1062]: INFO : mount: mount passed Jun 25 18:42:46.695151 ignition[1062]: INFO : Ignition finished successfully Jun 25 18:42:46.693572 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:42:46.713547 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:42:46.733446 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:42:46.746285 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1075) Jun 25 18:42:46.746319 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:46.750281 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:42:46.754732 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:42:46.761298 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:42:46.762580 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 18:42:46.787891 ignition[1091]: INFO : Ignition 2.19.0 Jun 25 18:42:46.787891 ignition[1091]: INFO : Stage: files Jun 25 18:42:46.792493 ignition[1091]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:46.792493 ignition[1091]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:46.792493 ignition[1091]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:42:46.802971 ignition[1091]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:42:46.802971 ignition[1091]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:42:46.871807 ignition[1091]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:42:46.876263 ignition[1091]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:42:46.876263 ignition[1091]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:42:46.876263 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:42:46.876263 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 18:42:46.872331 unknown[1091]: wrote ssh authorized keys file for user: core Jun 25 18:42:46.981583 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 18:42:47.082189 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jun 25 18:42:47.719490 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 18:42:48.019915 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 18:42:48.019915 ignition[1091]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 18:42:48.116841 ignition[1091]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:42:48.125599 ignition[1091]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:42:48.125599 ignition[1091]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 18:42:48.125599 ignition[1091]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:42:48.125599 ignition[1091]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:42:48.125599 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:42:48.125599 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:42:48.125599 ignition[1091]: INFO : files: files passed Jun 25 18:42:48.125599 ignition[1091]: INFO : Ignition finished successfully Jun 25 18:42:48.119148 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:42:48.143518 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:42:48.164574 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:42:48.168027 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:42:48.168127 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 18:42:48.181559 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:42:48.181559 initrd-setup-root-after-ignition[1121]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:42:48.185838 initrd-setup-root-after-ignition[1125]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:42:48.182614 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:42:48.201445 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:42:48.211440 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:42:48.232482 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:42:48.232585 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
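Everything the files stage reports above (the core user and its SSH keys, the Helm tarball, the nginx/nfs manifests, update.conf, the kubernetes sysext image and its /etc/extensions link, and the enabled prepare-helm.service unit) is driven by the Ignition config fetched earlier in the boot. A sketch of the shape such a config could take, written as a Python dict in Ignition spec-3 style; the spec version, key material, and unit body are placeholders, and only the paths and download URLs are taken from the log:

    import json

    config = {
        "ignition": {"version": "3.3.0"},   # assumed spec version
        "passwd": {"users": [{
            "name": "core",
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"],  # placeholder key
        }]},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"},
            ],
        },
        "systemd": {"units": [{
            "name": "prepare-helm.service",
            "enabled": True,
            "contents": "[Unit]\nDescription=Unpack Helm\n...",  # placeholder body
        }]},
    }

    print(json.dumps(config, indent=2))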
Jun 25 18:42:48.241898 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:42:48.244591 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:42:48.250075 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:42:48.265685 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:42:48.279765 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:42:48.288518 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:42:48.301008 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:42:48.304280 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:42:48.313399 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:42:48.318218 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:42:48.318399 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:42:48.327513 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:42:48.333151 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:42:48.335588 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:42:48.343487 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:42:48.349939 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 18:42:48.350097 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 18:42:48.350542 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:42:48.351103 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 18:42:48.352087 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 18:42:48.352545 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:42:48.352950 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:42:48.353085 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:42:48.354288 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:42:48.354744 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:42:48.355145 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 25 18:42:48.374798 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:42:48.381406 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:42:48.381562 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:42:48.433833 ignition[1145]: INFO : Ignition 2.19.0 Jun 25 18:42:48.433833 ignition[1145]: INFO : Stage: umount Jun 25 18:42:48.433833 ignition[1145]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:48.433833 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:48.433833 ignition[1145]: INFO : umount: umount passed Jun 25 18:42:48.433833 ignition[1145]: INFO : Ignition finished successfully Jun 25 18:42:48.387558 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:42:48.387702 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
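Each Ignition stage seen so far (kargs, disks, mount, files, and now umount) reports through the journal under the ignition syslog identifier, which is what makes blocks like the one above easy to pull back out after boot. A small sketch of doing that on the booted host, assuming the journal still holds the initrd records:

    import subprocess

    # -t filters by syslog identifier, -o cat drops the timestamp/host prefix.
    out = subprocess.run(
        ["journalctl", "-t", "ignition", "-o", "cat", "--no-pager"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines():
        if "Stage:" in line or "finished successfully" in line:
            print(line)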
Jun 25 18:42:48.393712 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:42:48.393851 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:42:48.399397 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 25 18:42:48.399533 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:42:48.412885 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 18:42:48.465827 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:42:48.472869 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:42:48.476253 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:42:48.479939 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:42:48.480042 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:42:48.493332 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:42:48.493452 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:42:48.501246 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:42:48.501417 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:42:48.509934 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:42:48.509988 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:42:48.512504 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 18:42:48.512544 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 18:42:48.512912 systemd[1]: Stopped target network.target - Network. Jun 25 18:42:48.513313 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:42:48.513348 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:42:48.513773 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:42:48.514178 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:42:48.525011 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:42:48.551068 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 18:42:48.553708 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 18:42:48.561092 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:42:48.561143 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:42:48.568665 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:42:48.568718 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:42:48.576426 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:42:48.576493 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:42:48.581683 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:42:48.581734 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:42:48.584953 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:42:48.590584 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:42:48.594430 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 18:42:48.594981 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:42:48.595061 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jun 25 18:42:48.599588 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:42:48.599666 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:42:48.605015 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:42:48.607778 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:42:48.615364 systemd-networkd[901]: eth0: DHCPv6 lease lost Jun 25 18:42:48.618196 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:42:48.618320 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:42:48.624389 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:42:48.624480 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:42:48.631140 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:42:48.631201 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:42:48.652420 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:42:48.656526 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:42:48.656592 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:42:48.662565 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:42:48.662618 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:42:48.668524 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:42:48.668565 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:42:48.668660 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 18:42:48.668697 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:42:48.669163 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:42:48.707913 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 18:42:48.708082 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:42:48.714509 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:42:48.714592 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:42:48.737680 kernel: hv_netvsc 002248a0-8c7d-0022-48a0-8c7d002248a0 eth0: Data path switched from VF: enP15383s1 Jun 25 18:42:48.720748 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:42:48.720793 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:42:48.729738 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:42:48.729779 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:42:48.730349 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 18:42:48.730383 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:42:48.731224 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:42:48.731261 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:42:48.752475 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:42:48.760494 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:42:48.760557 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
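The block above tears the initrd network stack down: networkd and resolved stop, the DHCPv6 lease on eth0 is dropped, udevd exits, and the hv_netvsc message records the accelerated-networking data path falling back from the Mellanox VF (enP15383s1) to the synthetic eth0. A read-only sketch of inspecting the same interface/VF pairing from userspace by walking sysfs (interface names are whatever the host exposes):

    import os

    SYSFS_NET = "/sys/class/net"

    # Each entry is a network interface; "operstate" and the optional "device"
    # symlink (the backing PCI/VMBus device) are standard sysfs attributes.
    for ifname in sorted(os.listdir(SYSFS_NET)):
        base = os.path.join(SYSFS_NET, ifname)
        with open(os.path.join(base, "operstate")) as f:
            state = f.read().strip()
        dev_link = os.path.join(base, "device")
        backing = os.path.realpath(dev_link) if os.path.exists(dev_link) else "virtual"
        print(f"{ifname:12s} {state:8s} {backing}")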
Jun 25 18:42:48.766458 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 25 18:42:48.766504 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:42:48.773495 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 18:42:48.773548 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:42:48.798156 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:42:48.798215 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:48.807221 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:42:48.807357 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:42:48.812674 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:42:48.812760 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:42:48.818755 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:42:48.833423 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:42:48.841710 systemd[1]: Switching root. Jun 25 18:42:48.924185 systemd-journald[176]: Journal stopped Jun 25 18:42:39.078010 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024 Jun 25 18:42:39.078048 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:42:39.078063 kernel: BIOS-provided physical RAM map: Jun 25 18:42:39.078075 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 25 18:42:39.078085 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jun 25 18:42:39.078096 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jun 25 18:42:39.078109 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Jun 25 18:42:39.078125 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Jun 25 18:42:39.078136 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jun 25 18:42:39.078147 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jun 25 18:42:39.078157 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jun 25 18:42:39.078169 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jun 25 18:42:39.078181 kernel: printk: bootconsole [earlyser0] enabled Jun 25 18:42:39.078194 kernel: NX (Execute Disable) protection: active Jun 25 18:42:39.078212 kernel: APIC: Static calls initialized Jun 25 18:42:39.078226 kernel: efi: EFI v2.7 by Microsoft Jun 25 18:42:39.078241 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 Jun 25 18:42:39.078254 kernel: SMBIOS 3.1.0 present. 
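After the switch to the real root, the journal above replays the kernel's boot record, including the BIOS-e820 memory map. Summing the regions marked usable is a quick sanity check that the map accounts for roughly the 8 GiB this VM exposes, which matches the "Memory: .../8387460K available" line later in the replay. A sketch that parses lines in exactly the format shown above, fed from a saved copy of the log (the boot.log filename is hypothetical; journalctl -k output works the same way):

    import re

    E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] usable")

    def usable_bytes(log_text: str) -> int:
        total = 0
        for start, end in E820_RE.findall(log_text):
            total += int(end, 16) - int(start, 16) + 1   # e820 ranges are inclusive
        return total

    with open("boot.log") as f:                 # hypothetical saved copy of this log
        print(usable_bytes(f.read()) / 2**30, "GiB")   # ~8.0 GiB for the map above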
Jun 25 18:42:39.078268 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jun 25 18:42:39.078283 kernel: Hypervisor detected: Microsoft Hyper-V Jun 25 18:42:39.078298 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jun 25 18:42:39.078311 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Jun 25 18:42:39.078323 kernel: Hyper-V: Nested features: 0x1e0101 Jun 25 18:42:39.078336 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jun 25 18:42:39.078353 kernel: Hyper-V: Using hypercall for remote TLB flush Jun 25 18:42:39.078366 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 25 18:42:39.078381 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 25 18:42:39.078398 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jun 25 18:42:39.078414 kernel: tsc: Detected 2593.907 MHz processor Jun 25 18:42:39.078430 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 18:42:39.078443 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 18:42:39.078457 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jun 25 18:42:39.078469 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jun 25 18:42:39.078484 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 18:42:39.078497 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jun 25 18:42:39.078509 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jun 25 18:42:39.078522 kernel: Using GB pages for direct mapping Jun 25 18:42:39.078533 kernel: Secure boot disabled Jun 25 18:42:39.078546 kernel: ACPI: Early table checksum verification disabled Jun 25 18:42:39.078558 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jun 25 18:42:39.078577 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:39.078593 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:39.078607 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jun 25 18:42:39.078621 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jun 25 18:42:39.078635 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:39.078648 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:39.078663 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:39.078679 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:39.078693 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:39.078706 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:39.078720 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:42:39.078734 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jun 25 18:42:39.078748 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jun 25 18:42:39.078762 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jun 25 18:42:39.078776 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jun 25 18:42:39.078792 kernel: 
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jun 25 18:42:39.078806 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jun 25 18:42:39.078820 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jun 25 18:42:39.078834 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jun 25 18:42:39.078848 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jun 25 18:42:39.078962 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jun 25 18:42:39.078971 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 25 18:42:39.078979 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 25 18:42:39.078990 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jun 25 18:42:39.079001 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jun 25 18:42:39.079009 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jun 25 18:42:39.079016 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jun 25 18:42:39.079026 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jun 25 18:42:39.079035 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jun 25 18:42:39.079042 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jun 25 18:42:39.079053 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jun 25 18:42:39.079060 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jun 25 18:42:39.079068 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jun 25 18:42:39.079078 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jun 25 18:42:39.079088 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jun 25 18:42:39.079096 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jun 25 18:42:39.079103 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jun 25 18:42:39.079111 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jun 25 18:42:39.079118 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jun 25 18:42:39.079129 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jun 25 18:42:39.079137 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jun 25 18:42:39.079144 kernel: Zone ranges: Jun 25 18:42:39.079156 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 18:42:39.079164 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jun 25 18:42:39.079171 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jun 25 18:42:39.079179 kernel: Movable zone start for each node Jun 25 18:42:39.079187 kernel: Early memory node ranges Jun 25 18:42:39.079197 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jun 25 18:42:39.079204 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jun 25 18:42:39.079212 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jun 25 18:42:39.079221 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jun 25 18:42:39.079231 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jun 25 18:42:39.079239 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 18:42:39.079246 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jun 25 18:42:39.079255 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Jun 25 18:42:39.079264 kernel: ACPI: PM-Timer IO Port: 0x408 Jun 25 18:42:39.079272 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jun 25 18:42:39.079279 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jun 25 18:42:39.079289 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 18:42:39.079297 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 18:42:39.079307 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jun 25 18:42:39.079315 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 25 18:42:39.079325 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jun 25 18:42:39.079333 kernel: Booting paravirtualized kernel on Hyper-V Jun 25 18:42:39.079340 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 18:42:39.079348 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 25 18:42:39.079358 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Jun 25 18:42:39.079366 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Jun 25 18:42:39.079373 kernel: pcpu-alloc: [0] 0 1 Jun 25 18:42:39.079383 kernel: Hyper-V: PV spinlocks enabled Jun 25 18:42:39.079393 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 18:42:39.079402 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:42:39.079410 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 18:42:39.079420 kernel: random: crng init done Jun 25 18:42:39.079428 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jun 25 18:42:39.079435 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 18:42:39.079443 kernel: Fallback order for Node 0: 0 Jun 25 18:42:39.079455 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jun 25 18:42:39.079469 kernel: Policy zone: Normal Jun 25 18:42:39.079480 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 18:42:39.079490 kernel: software IO TLB: area num 2. Jun 25 18:42:39.079499 kernel: Memory: 8070924K/8387460K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 316276K reserved, 0K cma-reserved) Jun 25 18:42:39.079507 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 18:42:39.079515 kernel: ftrace: allocating 37650 entries in 148 pages Jun 25 18:42:39.079526 kernel: ftrace: allocated 148 pages with 3 groups Jun 25 18:42:39.079534 kernel: Dynamic Preempt: voluntary Jun 25 18:42:39.079542 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 18:42:39.079553 kernel: rcu: RCU event tracing is enabled. Jun 25 18:42:39.079564 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 18:42:39.079572 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 18:42:39.079580 kernel: Rude variant of Tasks RCU enabled. Jun 25 18:42:39.079588 kernel: Tracing variant of Tasks RCU enabled. 
Jun 25 18:42:39.079599 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 18:42:39.079609 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 18:42:39.079620 kernel: Using NULL legacy PIC Jun 25 18:42:39.079628 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jun 25 18:42:39.079636 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 18:42:39.079644 kernel: Console: colour dummy device 80x25 Jun 25 18:42:39.079652 kernel: printk: console [tty1] enabled Jun 25 18:42:39.079660 kernel: printk: console [ttyS0] enabled Jun 25 18:42:39.079671 kernel: printk: bootconsole [earlyser0] disabled Jun 25 18:42:39.079679 kernel: ACPI: Core revision 20230628 Jun 25 18:42:39.079687 kernel: Failed to register legacy timer interrupt Jun 25 18:42:39.079697 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 18:42:39.079708 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 25 18:42:39.079716 kernel: Hyper-V: Using IPI hypercalls Jun 25 18:42:39.079724 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jun 25 18:42:39.079732 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jun 25 18:42:39.079743 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jun 25 18:42:39.079751 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jun 25 18:42:39.079760 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jun 25 18:42:39.079770 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jun 25 18:42:39.079780 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Jun 25 18:42:39.079790 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 25 18:42:39.079799 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jun 25 18:42:39.079807 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 18:42:39.079818 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 18:42:39.079826 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 18:42:39.079834 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 18:42:39.079845 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
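The delay-loop line above is derived from the timer frequency rather than measured: with lpj=2593907 and the usual kernel relation BogoMIPS = lpj * HZ / 500000, a HZ=1000 build (an assumption here) reproduces the 5187.81 printed above, and the later "Total of 2 processors activated (10375.62 BogoMIPS)" line is simply twice that. A small check that roughly mirrors the kernel's truncating integer arithmetic:

    lpj = 2_593_907          # loops_per_jiffy reported in the calibration line
    HZ = 1000                # assumed CONFIG_HZ for this kernel build

    def bogomips_str(loops_per_jiffy: int, cpus: int = 1) -> str:
        # lpj / (500000 / HZ), printed with two truncated decimal places.
        v = cpus * loops_per_jiffy * HZ // 5000
        return f"{v // 100}.{v % 100:02d}"

    print(bogomips_str(lpj))          # 5187.81  (per-CPU calibration line)
    print(bogomips_str(lpj, cpus=2))  # 10375.62 (SMP "Total of 2 processors" line)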
Jun 25 18:42:39.079864 kernel: RETBleed: Vulnerable Jun 25 18:42:39.079875 kernel: Speculative Store Bypass: Vulnerable Jun 25 18:42:39.079883 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 18:42:39.079894 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 18:42:39.079901 kernel: GDS: Unknown: Dependent on hypervisor status Jun 25 18:42:39.079913 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 18:42:39.079920 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 18:42:39.079929 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 18:42:39.079939 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jun 25 18:42:39.079947 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jun 25 18:42:39.079957 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jun 25 18:42:39.079966 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 18:42:39.079976 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jun 25 18:42:39.079987 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jun 25 18:42:39.079995 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jun 25 18:42:39.080004 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jun 25 18:42:39.080014 kernel: Freeing SMP alternatives memory: 32K Jun 25 18:42:39.080021 kernel: pid_max: default: 32768 minimum: 301 Jun 25 18:42:39.080032 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 18:42:39.080040 kernel: SELinux: Initializing. Jun 25 18:42:39.080049 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 18:42:39.080059 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 18:42:39.080070 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jun 25 18:42:39.080078 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:42:39.080089 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:42:39.080100 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:42:39.080108 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jun 25 18:42:39.080119 kernel: signal: max sigframe size: 3632 Jun 25 18:42:39.080127 kernel: rcu: Hierarchical SRCU implementation. Jun 25 18:42:39.080137 kernel: rcu: Max phase no-delay instances is 400. Jun 25 18:42:39.080146 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 25 18:42:39.080154 kernel: smp: Bringing up secondary CPUs ... Jun 25 18:42:39.080165 kernel: smpboot: x86: Booting SMP configuration: Jun 25 18:42:39.080175 kernel: .... node #0, CPUs: #1 Jun 25 18:42:39.080185 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jun 25 18:42:39.080195 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
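The RETBleed, Speculative Store Bypass, TAA, and MMIO Stale Data lines above are the boot-time view of the CPU vulnerability status; the same information remains available at runtime under sysfs. A read-only sketch:

    import os

    VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

    # One file per known issue (spectre_v2, tsx_async_abort, mmio_stale_data, ...),
    # each holding a one-line status such as "Vulnerable" or "Mitigation: ...".
    for name in sorted(os.listdir(VULN_DIR)):
        with open(os.path.join(VULN_DIR, name)) as f:
            print(f"{name:24s} {f.read().strip()}")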
Jun 25 18:42:39.080203 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 18:42:39.080214 kernel: smpboot: Max logical packages: 1 Jun 25 18:42:39.080222 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jun 25 18:42:39.080230 kernel: devtmpfs: initialized Jun 25 18:42:39.080241 kernel: x86/mm: Memory block size: 128MB Jun 25 18:42:39.080251 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jun 25 18:42:39.080262 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 18:42:39.080271 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 18:42:39.080279 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 18:42:39.080290 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 18:42:39.080298 kernel: audit: initializing netlink subsys (disabled) Jun 25 18:42:39.080306 kernel: audit: type=2000 audit(1719340957.028:1): state=initialized audit_enabled=0 res=1 Jun 25 18:42:39.080316 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 18:42:39.080324 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 18:42:39.080337 kernel: cpuidle: using governor menu Jun 25 18:42:39.080345 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 18:42:39.080353 kernel: dca service started, version 1.12.1 Jun 25 18:42:39.080364 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jun 25 18:42:39.080372 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 25 18:42:39.080381 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 18:42:39.080391 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 18:42:39.080399 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 18:42:39.080410 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 18:42:39.080421 kernel: ACPI: Added _OSI(Module Device) Jun 25 18:42:39.080429 kernel: ACPI: Added _OSI(Processor Device) Jun 25 18:42:39.080440 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 18:42:39.080448 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 18:42:39.080456 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 18:42:39.080464 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 25 18:42:39.080474 kernel: ACPI: Interpreter enabled Jun 25 18:42:39.080484 kernel: ACPI: PM: (supports S0 S5) Jun 25 18:42:39.080493 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 18:42:39.080504 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 18:42:39.080515 kernel: PCI: Ignoring E820 reservations for host bridge windows Jun 25 18:42:39.080523 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jun 25 18:42:39.080533 kernel: iommu: Default domain type: Translated Jun 25 18:42:39.080542 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 18:42:39.080550 kernel: efivars: Registered efivars operations Jun 25 18:42:39.080561 kernel: PCI: Using ACPI for IRQ routing Jun 25 18:42:39.080569 kernel: PCI: System does not support PCI Jun 25 18:42:39.080577 kernel: vgaarb: loaded Jun 25 18:42:39.080589 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jun 25 18:42:39.080597 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 18:42:39.080607 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 18:42:39.080616 kernel: 
pnp: PnP ACPI init Jun 25 18:42:39.080624 kernel: pnp: PnP ACPI: found 3 devices Jun 25 18:42:39.080635 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 18:42:39.080643 kernel: NET: Registered PF_INET protocol family Jun 25 18:42:39.080652 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 18:42:39.080663 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jun 25 18:42:39.080673 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 18:42:39.080684 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 18:42:39.080692 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jun 25 18:42:39.080700 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jun 25 18:42:39.080711 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 25 18:42:39.080719 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 25 18:42:39.080728 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 18:42:39.080739 kernel: NET: Registered PF_XDP protocol family Jun 25 18:42:39.080747 kernel: PCI: CLS 0 bytes, default 64 Jun 25 18:42:39.080760 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jun 25 18:42:39.080768 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Jun 25 18:42:39.080776 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 25 18:42:39.080787 kernel: Initialise system trusted keyrings Jun 25 18:42:39.080795 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jun 25 18:42:39.080804 kernel: Key type asymmetric registered Jun 25 18:42:39.080813 kernel: Asymmetric key parser 'x509' registered Jun 25 18:42:39.080821 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jun 25 18:42:39.080832 kernel: io scheduler mq-deadline registered Jun 25 18:42:39.080842 kernel: io scheduler kyber registered Jun 25 18:42:39.080857 kernel: io scheduler bfq registered Jun 25 18:42:39.080867 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 18:42:39.080875 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 18:42:39.080885 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 18:42:39.080894 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jun 25 18:42:39.080905 kernel: i8042: PNP: No PS/2 controller found. 
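The block above registers the three block-layer I/O schedulers (mq-deadline, kyber, bfq) and, just before it, switches the clocksource to hyperv_clocksource_tsc_page. Both choices can be confirmed from sysfs once the system is up; a read-only sketch, with the sda device name taken from elsewhere in this log:

    # The active scheduler is shown in brackets, e.g. "[mq-deadline] kyber bfq none".
    with open("/sys/block/sda/queue/scheduler") as f:
        print("io scheduler:", f.read().strip())

    with open("/sys/devices/system/clocksource/clocksource0/current_clocksource") as f:
        print("clocksource: ", f.read().strip())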
Jun 25 18:42:39.081039 kernel: rtc_cmos 00:02: registered as rtc0 Jun 25 18:42:39.081130 kernel: rtc_cmos 00:02: setting system clock to 2024-06-25T18:42:38 UTC (1719340958) Jun 25 18:42:39.081215 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jun 25 18:42:39.081226 kernel: intel_pstate: CPU model not supported Jun 25 18:42:39.081237 kernel: efifb: probing for efifb Jun 25 18:42:39.081246 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 25 18:42:39.081254 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 25 18:42:39.081265 kernel: efifb: scrolling: redraw Jun 25 18:42:39.081273 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 25 18:42:39.081285 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 18:42:39.081294 kernel: fb0: EFI VGA frame buffer device Jun 25 18:42:39.081302 kernel: pstore: Using crash dump compression: deflate Jun 25 18:42:39.081313 kernel: pstore: Registered efi_pstore as persistent store backend Jun 25 18:42:39.081321 kernel: NET: Registered PF_INET6 protocol family Jun 25 18:42:39.081329 kernel: Segment Routing with IPv6 Jun 25 18:42:39.081340 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 18:42:39.081349 kernel: NET: Registered PF_PACKET protocol family Jun 25 18:42:39.081360 kernel: Key type dns_resolver registered Jun 25 18:42:39.081368 kernel: IPI shorthand broadcast: enabled Jun 25 18:42:39.081381 kernel: sched_clock: Marking stable (852006200, 47932100)->(1121090700, -221152400) Jun 25 18:42:39.081389 kernel: registered taskstats version 1 Jun 25 18:42:39.081397 kernel: Loading compiled-in X.509 certificates Jun 25 18:42:39.081408 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90' Jun 25 18:42:39.081416 kernel: Key type .fscrypt registered Jun 25 18:42:39.081424 kernel: Key type fscrypt-provisioning registered Jun 25 18:42:39.081435 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 18:42:39.081443 kernel: ima: Allocated hash algorithm: sha1 Jun 25 18:42:39.081456 kernel: ima: No architecture policies found Jun 25 18:42:39.081464 kernel: clk: Disabling unused clocks Jun 25 18:42:39.081472 kernel: Freeing unused kernel image (initmem) memory: 49384K Jun 25 18:42:39.081483 kernel: Write protecting the kernel read-only data: 36864k Jun 25 18:42:39.081492 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K Jun 25 18:42:39.081501 kernel: Run /init as init process Jun 25 18:42:39.081510 kernel: with arguments: Jun 25 18:42:39.081518 kernel: /init Jun 25 18:42:39.081528 kernel: with environment: Jun 25 18:42:39.081539 kernel: HOME=/ Jun 25 18:42:39.081547 kernel: TERM=linux Jun 25 18:42:39.081558 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 18:42:39.081568 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:42:39.081581 systemd[1]: Detected virtualization microsoft. Jun 25 18:42:39.081590 systemd[1]: Detected architecture x86-64. Jun 25 18:42:39.081599 systemd[1]: Running in initrd. Jun 25 18:42:39.081609 systemd[1]: No hostname configured, using default hostname. Jun 25 18:42:39.081620 systemd[1]: Hostname set to . Jun 25 18:42:39.081632 systemd[1]: Initializing machine ID from random generator. 
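The rtc_cmos line above sets the system clock to 2024-06-25T18:42:38 UTC and prints the matching epoch value 1719340958 in parentheses; the two really are the same instant, which is easy to confirm:

    from datetime import datetime, timezone

    # 1719340958 is the epoch value printed by rtc_cmos above.
    print(datetime.fromtimestamp(1719340958, tz=timezone.utc).isoformat())
    # -> 2024-06-25T18:42:38+00:00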
Jun 25 18:42:39.081640 systemd[1]: Queued start job for default target initrd.target. Jun 25 18:42:39.081649 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:42:39.081660 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:42:39.081669 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 18:42:39.081680 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:42:39.081689 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 25 18:42:39.081701 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 18:42:39.081712 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 18:42:39.081721 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 18:42:39.081732 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:42:39.081742 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:42:39.081752 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:42:39.081764 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:42:39.081783 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:42:39.081800 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:42:39.081823 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:42:39.081846 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:42:39.081882 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 18:42:39.081904 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 18:42:39.081922 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:42:39.081939 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:42:39.081959 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:42:39.081976 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:42:39.081996 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:42:39.082017 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:42:39.082039 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:42:39.082058 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:42:39.082078 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:42:39.082096 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:42:39.082136 systemd-journald[176]: Collecting audit messages is disabled. Jun 25 18:42:39.082180 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:39.082196 systemd-journald[176]: Journal started Jun 25 18:42:39.082235 systemd-journald[176]: Runtime Journal (/run/log/journal/13f8c2f3939f42f98d18dea483303dde) is 8.0M, max 158.8M, 150.8M free. Jun 25 18:42:39.091875 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:42:39.099473 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Jun 25 18:42:39.102704 systemd-modules-load[177]: Inserted module 'overlay' Jun 25 18:42:39.110044 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:42:39.115653 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 18:42:39.140004 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:42:39.152906 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 18:42:39.155562 systemd-modules-load[177]: Inserted module 'br_netfilter' Jun 25 18:42:39.157929 kernel: Bridge firewalling registered Jun 25 18:42:39.166018 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:42:39.170068 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:42:39.179722 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:39.180091 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:42:39.180405 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:42:39.184981 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:42:39.189985 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:42:39.195065 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:42:39.219350 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:42:39.223749 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:42:39.230043 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:42:39.243042 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 18:42:39.248978 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:42:39.268870 dracut-cmdline[211]: dracut-dracut-053 Jun 25 18:42:39.273483 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:42:39.304554 systemd-resolved[213]: Positive Trust Anchors: Jun 25 18:42:39.304574 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:42:39.304628 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:42:39.309952 systemd-resolved[213]: Defaulting to hostname 'linux'. 
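The dracut-cmdline entries above echo the full kernel command line that both dracut and systemd parse in the initrd (flatcar.first_boot, flatcar.oem.id=azure, the verity hash, and so on). A minimal sketch of turning /proc/cmdline into a dict the way such tooling conceptually does; repeated keys such as console= simply keep their last value here:

    def parse_cmdline(text: str) -> dict:
        args = {}
        for token in text.split():
            key, _, value = token.partition("=")   # split on the first '=' only
            args[key] = value if value else True   # bare flags become True
        return args

    with open("/proc/cmdline") as f:
        cmdline = parse_cmdline(f.read())

    print(cmdline.get("flatcar.oem.id"))   # "azure" on the host that produced this log
    print(cmdline.get("root"))             # "LABEL=ROOT"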
Jun 25 18:42:39.310809 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:42:39.330831 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:42:39.372876 kernel: SCSI subsystem initialized Jun 25 18:42:39.384870 kernel: Loading iSCSI transport class v2.0-870. Jun 25 18:42:39.397876 kernel: iscsi: registered transport (tcp) Jun 25 18:42:39.423790 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:42:39.423842 kernel: QLogic iSCSI HBA Driver Jun 25 18:42:39.458938 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:42:39.468001 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:42:39.497378 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:42:39.497440 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:42:39.500920 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:42:39.545878 kernel: raid6: avx512x4 gen() 18391 MB/s Jun 25 18:42:39.564865 kernel: raid6: avx512x2 gen() 18175 MB/s Jun 25 18:42:39.583862 kernel: raid6: avx512x1 gen() 18184 MB/s Jun 25 18:42:39.602866 kernel: raid6: avx2x4 gen() 18235 MB/s Jun 25 18:42:39.621862 kernel: raid6: avx2x2 gen() 18171 MB/s Jun 25 18:42:39.642618 kernel: raid6: avx2x1 gen() 13749 MB/s Jun 25 18:42:39.642647 kernel: raid6: using algorithm avx512x4 gen() 18391 MB/s Jun 25 18:42:39.664912 kernel: raid6: .... xor() 8063 MB/s, rmw enabled Jun 25 18:42:39.664944 kernel: raid6: using avx512x2 recovery algorithm Jun 25 18:42:39.691886 kernel: xor: automatically using best checksumming function avx Jun 25 18:42:39.861878 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:42:39.871052 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:42:39.884064 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:42:39.900624 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jun 25 18:42:39.905082 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:42:39.921245 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:42:39.932352 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Jun 25 18:42:39.958579 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:42:39.968022 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:42:40.009444 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:42:40.023999 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 18:42:40.055155 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:42:40.062518 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:42:40.070167 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:42:40.076781 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:42:40.089077 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:42:40.102783 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 18:42:40.104625 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:42:40.104812 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jun 25 18:42:40.108412 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:42:40.125879 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:42:40.129001 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:40.132221 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:40.152374 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 18:42:40.151932 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:40.162832 kernel: AES CTR mode by8 optimization enabled Jun 25 18:42:40.165654 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:42:40.169192 kernel: hv_vmbus: Vmbus version:5.2 Jun 25 18:42:40.175121 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:42:40.177923 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:40.189117 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 25 18:42:40.198881 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 25 18:42:40.200041 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:40.239324 kernel: hv_vmbus: registering driver hv_storvsc Jun 25 18:42:40.239353 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 18:42:40.239371 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 18:42:40.239395 kernel: scsi host1: storvsc_host_t Jun 25 18:42:40.239584 kernel: scsi host0: storvsc_host_t Jun 25 18:42:40.239735 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jun 25 18:42:40.239919 kernel: PTP clock support registered Jun 25 18:42:40.239938 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jun 25 18:42:40.245285 kernel: hv_utils: Registering HyperV Utility Driver Jun 25 18:42:40.245332 kernel: hv_vmbus: registering driver hv_utils Jun 25 18:42:40.247697 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 18:42:40.251236 kernel: hv_utils: Heartbeat IC version 3.0 Jun 25 18:42:40.253303 kernel: hv_utils: Shutdown IC version 3.2 Jun 25 18:42:40.257878 kernel: hv_utils: TimeSync IC version 4.0 Jun 25 18:42:41.155747 systemd-resolved[213]: Clock change detected. Flushing caches. Jun 25 18:42:41.164375 kernel: hv_vmbus: registering driver hid_hyperv Jun 25 18:42:41.178815 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 25 18:42:41.178858 kernel: hv_vmbus: registering driver hv_netvsc Jun 25 18:42:41.178879 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 25 18:42:41.180095 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:41.192498 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jun 25 18:42:41.213347 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 25 18:42:41.215925 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 18:42:41.215948 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 25 18:42:41.235924 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jun 25 18:42:41.251442 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jun 25 18:42:41.251634 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 25 18:42:41.251807 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jun 25 18:42:41.251974 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jun 25 18:42:41.252143 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:41.252164 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 25 18:42:41.240432 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:42:41.367491 kernel: hv_netvsc 002248a0-8c7d-0022-48a0-8c7d002248a0 eth0: VF slot 1 added Jun 25 18:42:41.376304 kernel: hv_vmbus: registering driver hv_pci Jun 25 18:42:41.381912 kernel: hv_pci 4f2700fb-3c17-498f-a28a-252480473288: PCI VMBus probing: Using version 0x10004 Jun 25 18:42:41.425464 kernel: hv_pci 4f2700fb-3c17-498f-a28a-252480473288: PCI host bridge to bus 3c17:00 Jun 25 18:42:41.425630 kernel: pci_bus 3c17:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jun 25 18:42:41.425821 kernel: pci_bus 3c17:00: No busn resource found for root bus, will use [bus 00-ff] Jun 25 18:42:41.425979 kernel: pci 3c17:00:02.0: [15b3:1016] type 00 class 0x020000 Jun 25 18:42:41.426494 kernel: pci 3c17:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jun 25 18:42:41.426664 kernel: pci 3c17:00:02.0: enabling Extended Tags Jun 25 18:42:41.426843 kernel: pci 3c17:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 3c17:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jun 25 18:42:41.427021 kernel: pci_bus 3c17:00: busn_res: [bus 00-ff] end is updated to 00 Jun 25 18:42:41.427173 kernel: pci 3c17:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jun 25 18:42:41.616630 kernel: mlx5_core 3c17:00:02.0: enabling device (0000 -> 0002) Jun 25 18:42:41.859974 kernel: mlx5_core 3c17:00:02.0: firmware version: 14.30.1284 Jun 25 18:42:41.860180 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (451) Jun 25 18:42:41.860202 kernel: hv_netvsc 002248a0-8c7d-0022-48a0-8c7d002248a0 eth0: VF registering: eth1 Jun 25 18:42:41.861151 kernel: mlx5_core 3c17:00:02.0 eth1: joined to eth0 Jun 25 18:42:41.861378 kernel: mlx5_core 3c17:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jun 25 18:42:41.756291 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jun 25 18:42:41.832555 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 18:42:41.875288 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (450) Jun 25 18:42:41.880287 kernel: mlx5_core 3c17:00:02.0 enP15383s1: renamed from eth1 Jun 25 18:42:41.897835 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jun 25 18:42:41.909152 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jun 25 18:42:41.912691 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. 
Jun 25 18:42:41.931402 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:42:41.944327 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:41.952302 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:42.958658 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:42:42.959113 disk-uuid[604]: The operation has completed successfully. Jun 25 18:42:43.037817 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:42:43.037943 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:42:43.061418 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:42:43.067448 sh[717]: Success Jun 25 18:42:43.100445 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 25 18:42:43.319413 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:42:43.336390 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:42:43.336729 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 18:42:43.355283 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0 Jun 25 18:42:43.355323 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:42:43.360872 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:42:43.363765 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:42:43.366275 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:42:43.758676 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:42:43.764953 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:42:43.774425 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:42:43.781397 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 18:42:43.791591 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:43.797587 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:42:43.797635 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:42:43.821301 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:42:43.830412 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:42:43.837292 kernel: BTRFS info (device sda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:43.843702 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:42:43.855406 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 18:42:43.884079 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:42:43.894522 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:42:43.916212 systemd-networkd[901]: lo: Link UP Jun 25 18:42:43.916222 systemd-networkd[901]: lo: Gained carrier Jun 25 18:42:43.918251 systemd-networkd[901]: Enumeration completed Jun 25 18:42:43.918544 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:42:43.921207 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
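zz-default.network, matched above for eth0, is the catch-all policy that brings up otherwise unconfigured interfaces via DHCP. Its exact contents are not reproduced in this log; a sketch of such a unit, assuming it simply matches broadly and enables DHCP, would be:

    # catch-all DHCP policy (sketch; not the verbatim zz-default.network shipped in the image)
    [Match]
    Name=en* eth*

    [Network]
    DHCP=yes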
Jun 25 18:42:43.921213 systemd-networkd[901]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:42:43.927684 systemd[1]: Reached target network.target - Network. Jun 25 18:42:43.985292 kernel: mlx5_core 3c17:00:02.0 enP15383s1: Link up Jun 25 18:42:44.016306 kernel: hv_netvsc 002248a0-8c7d-0022-48a0-8c7d002248a0 eth0: Data path switched to VF: enP15383s1 Jun 25 18:42:44.016633 systemd-networkd[901]: enP15383s1: Link UP Jun 25 18:42:44.016805 systemd-networkd[901]: eth0: Link UP Jun 25 18:42:44.017200 systemd-networkd[901]: eth0: Gained carrier Jun 25 18:42:44.017215 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:42:44.029131 systemd-networkd[901]: enP15383s1: Gained carrier Jun 25 18:42:44.068327 systemd-networkd[901]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 25 18:42:44.661456 ignition[853]: Ignition 2.19.0 Jun 25 18:42:44.661469 ignition[853]: Stage: fetch-offline Jun 25 18:42:44.663143 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:42:44.661521 ignition[853]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:44.661532 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:44.661656 ignition[853]: parsed url from cmdline: "" Jun 25 18:42:44.661662 ignition[853]: no config URL provided Jun 25 18:42:44.661669 ignition[853]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:42:44.681383 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 25 18:42:44.661681 ignition[853]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:42:44.661688 ignition[853]: failed to fetch config: resource requires networking Jun 25 18:42:44.662222 ignition[853]: Ignition finished successfully Jun 25 18:42:44.699554 ignition[910]: Ignition 2.19.0 Jun 25 18:42:44.699567 ignition[910]: Stage: fetch Jun 25 18:42:44.699797 ignition[910]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:44.699811 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:44.699913 ignition[910]: parsed url from cmdline: "" Jun 25 18:42:44.699916 ignition[910]: no config URL provided Jun 25 18:42:44.699921 ignition[910]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:42:44.699930 ignition[910]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:42:44.699953 ignition[910]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 25 18:42:44.791834 ignition[910]: GET result: OK Jun 25 18:42:44.792016 ignition[910]: config has been read from IMDS userdata Jun 25 18:42:44.792068 ignition[910]: parsing config with SHA512: efc785744051ed65d2c1d45e6107f5231a00cad6184542730f95a8f5da3c4935b8e94c8963fed9cad67221aa8b3be71b657ff2be7225ad437ebb27b7e888496b Jun 25 18:42:44.796589 unknown[910]: fetched base config from "system" Jun 25 18:42:44.797103 ignition[910]: fetch: fetch complete Jun 25 18:42:44.796596 unknown[910]: fetched base config from "system" Jun 25 18:42:44.797110 ignition[910]: fetch: fetch passed Jun 25 18:42:44.796603 unknown[910]: fetched user config from "azure" Jun 25 18:42:44.797163 ignition[910]: Ignition finished successfully Jun 25 18:42:44.798783 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 18:42:44.818448 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
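The fetch stage above retrieves the user-supplied Ignition config from the Azure IMDS userData endpoint and merges it with the built-in base config. The payload itself is never printed; a minimal, hypothetical Ignition v3 config of the kind that would produce the later "adding ssh keys to user core" step looks like this (placeholder key, not the actual userData):

    {
      "ignition": { "version": "3.3.0" },
      "passwd": {
        "users": [
          {
            "name": "core",
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder-key"]
          }
        ]
      }
    }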
Jun 25 18:42:44.834349 ignition[917]: Ignition 2.19.0 Jun 25 18:42:44.834360 ignition[917]: Stage: kargs Jun 25 18:42:44.834581 ignition[917]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:44.834593 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:44.835467 ignition[917]: kargs: kargs passed Jun 25 18:42:44.835513 ignition[917]: Ignition finished successfully Jun 25 18:42:44.847660 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:42:44.858424 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 18:42:44.873809 ignition[924]: Ignition 2.19.0 Jun 25 18:42:44.873819 ignition[924]: Stage: disks Jun 25 18:42:44.875751 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:42:44.874028 ignition[924]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:44.879018 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:42:44.874041 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:44.883345 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:42:44.874891 ignition[924]: disks: disks passed Jun 25 18:42:44.874932 ignition[924]: Ignition finished successfully Jun 25 18:42:44.903521 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:42:44.906445 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:42:44.914699 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:42:44.928404 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 18:42:44.986193 systemd-fsck[933]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jun 25 18:42:44.991495 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:42:45.001433 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:42:45.106350 kernel: EXT4-fs (sda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none. Jun 25 18:42:45.106922 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:42:45.110139 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:42:45.162386 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:42:45.168628 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:42:45.180284 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (944) Jun 25 18:42:45.184915 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 25 18:42:45.194680 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:45.194719 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:42:45.194743 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:42:45.194765 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:42:45.197959 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:42:45.197998 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:42:45.211283 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:42:45.213882 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jun 25 18:42:45.231559 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:42:45.592435 systemd-networkd[901]: enP15383s1: Gained IPv6LL Jun 25 18:42:45.656401 systemd-networkd[901]: eth0: Gained IPv6LL Jun 25 18:42:45.761986 coreos-metadata[946]: Jun 25 18:42:45.761 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 18:42:45.769521 coreos-metadata[946]: Jun 25 18:42:45.769 INFO Fetch successful Jun 25 18:42:45.772619 coreos-metadata[946]: Jun 25 18:42:45.769 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 25 18:42:45.784332 coreos-metadata[946]: Jun 25 18:42:45.784 INFO Fetch successful Jun 25 18:42:45.800699 coreos-metadata[946]: Jun 25 18:42:45.800 INFO wrote hostname ci-4012.0.0-a-d50f1c7422 to /sysroot/etc/hostname Jun 25 18:42:45.806786 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:42:45.934480 initrd-setup-root[973]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:42:45.955794 initrd-setup-root[980]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:42:45.975656 initrd-setup-root[987]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:42:45.980696 initrd-setup-root[994]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:42:46.627027 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:42:46.640417 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:42:46.645404 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:42:46.658088 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:42:46.664534 kernel: BTRFS info (device sda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:46.686458 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 18:42:46.691975 ignition[1062]: INFO : Ignition 2.19.0 Jun 25 18:42:46.691975 ignition[1062]: INFO : Stage: mount Jun 25 18:42:46.691975 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:46.691975 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:46.695151 ignition[1062]: INFO : mount: mount passed Jun 25 18:42:46.695151 ignition[1062]: INFO : Ignition finished successfully Jun 25 18:42:46.693572 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:42:46.713547 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:42:46.733446 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:42:46.746285 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1075) Jun 25 18:42:46.746319 kernel: BTRFS info (device sda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:42:46.750281 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:42:46.754732 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:42:46.761298 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:42:46.762580 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
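The flatcar-metadata-hostname step above asks Azure IMDS for the instance name and writes it to /sysroot/etc/hostname. IMDS only answers requests that carry the Metadata header, so the equivalent manual query (illustrative only) is:

    # same endpoint coreos-metadata used above; IMDS requires the Metadata: true header
    curl -H "Metadata: true" \
      "http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text"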
Jun 25 18:42:46.787891 ignition[1091]: INFO : Ignition 2.19.0 Jun 25 18:42:46.787891 ignition[1091]: INFO : Stage: files Jun 25 18:42:46.792493 ignition[1091]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:46.792493 ignition[1091]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:46.792493 ignition[1091]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:42:46.802971 ignition[1091]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:42:46.802971 ignition[1091]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:42:46.871807 ignition[1091]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:42:46.876263 ignition[1091]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:42:46.876263 ignition[1091]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:42:46.876263 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:42:46.876263 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 18:42:46.872331 unknown[1091]: wrote ssh authorized keys file for user: core Jun 25 18:42:46.981583 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 18:42:47.082189 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 18:42:47.088548 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jun 25 18:42:47.719490 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 18:42:48.019915 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 18:42:48.019915 ignition[1091]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 18:42:48.116841 ignition[1091]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:42:48.125599 ignition[1091]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:42:48.125599 ignition[1091]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 18:42:48.125599 ignition[1091]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:42:48.125599 ignition[1091]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:42:48.125599 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:42:48.125599 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:42:48.125599 ignition[1091]: INFO : files: files passed Jun 25 18:42:48.125599 ignition[1091]: INFO : Ignition finished successfully Jun 25 18:42:48.119148 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:42:48.143518 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:42:48.164574 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:42:48.168027 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:42:48.168127 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 18:42:48.181559 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:42:48.181559 initrd-setup-root-after-ignition[1121]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:42:48.185838 initrd-setup-root-after-ignition[1125]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:42:48.182614 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:42:48.201445 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:42:48.211440 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:42:48.232482 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:42:48.232585 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
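The "setting preset to enabled" step above records the enablement of prepare-helm.service as a systemd preset rather than creating unit symlinks directly; a preset directive is a single line per unit. Sketch only (the preset file path is an assumption, it is not shown in the log):

    # e.g. /etc/systemd/system-preset/20-ignition.preset  (path assumed)
    enable prepare-helm.service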
Jun 25 18:42:48.241898 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:42:48.244591 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:42:48.250075 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:42:48.265685 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:42:48.279765 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:42:48.288518 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:42:48.301008 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:42:48.304280 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:42:48.313399 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:42:48.318218 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:42:48.318399 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:42:48.327513 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:42:48.333151 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:42:48.335588 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:42:48.343487 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:42:48.349939 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 18:42:48.350097 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 18:42:48.350542 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:42:48.351103 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 18:42:48.352087 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 18:42:48.352545 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:42:48.352950 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:42:48.353085 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:42:48.354288 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:42:48.354744 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:42:48.355145 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 25 18:42:48.374798 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:42:48.381406 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:42:48.381562 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:42:48.433833 ignition[1145]: INFO : Ignition 2.19.0 Jun 25 18:42:48.433833 ignition[1145]: INFO : Stage: umount Jun 25 18:42:48.433833 ignition[1145]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:42:48.433833 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:42:48.433833 ignition[1145]: INFO : umount: umount passed Jun 25 18:42:48.433833 ignition[1145]: INFO : Ignition finished successfully Jun 25 18:42:48.387558 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:42:48.387702 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Jun 25 18:42:48.393712 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:42:48.393851 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:42:48.399397 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 25 18:42:48.399533 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:42:48.412885 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 18:42:48.465827 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:42:48.472869 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:42:48.476253 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:42:48.479939 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:42:48.480042 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:42:48.493332 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:42:48.493452 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:42:48.501246 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:42:48.501417 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:42:48.509934 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:42:48.509988 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:42:48.512504 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 18:42:48.512544 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 18:42:48.512912 systemd[1]: Stopped target network.target - Network. Jun 25 18:42:48.513313 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:42:48.513348 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:42:48.513773 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:42:48.514178 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:42:48.525011 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:42:48.551068 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 18:42:48.553708 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 18:42:48.561092 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:42:48.561143 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:42:48.568665 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:42:48.568718 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:42:48.576426 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:42:48.576493 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:42:48.581683 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:42:48.581734 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:42:48.584953 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:42:48.590584 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:42:48.594430 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 18:42:48.594981 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:42:48.595061 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jun 25 18:42:48.599588 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:42:48.599666 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:42:48.605015 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:42:48.607778 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:42:48.615364 systemd-networkd[901]: eth0: DHCPv6 lease lost Jun 25 18:42:48.618196 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:42:48.618320 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:42:48.624389 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:42:48.624480 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:42:48.631140 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:42:48.631201 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:42:48.652420 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:42:48.656526 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:42:48.656592 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:42:48.662565 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:42:48.662618 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:42:48.668524 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:42:48.668565 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:42:48.668660 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 18:42:48.668697 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:42:48.669163 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:42:48.707913 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 18:42:48.708082 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:42:48.714509 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:42:48.714592 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:42:48.737680 kernel: hv_netvsc 002248a0-8c7d-0022-48a0-8c7d002248a0 eth0: Data path switched from VF: enP15383s1 Jun 25 18:42:48.720748 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:42:48.720793 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:42:48.729738 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:42:48.729779 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:42:48.730349 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 18:42:48.730383 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:42:48.731224 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:42:48.731261 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:42:48.752475 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:42:48.760494 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:42:48.760557 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jun 25 18:42:48.766458 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 25 18:42:48.766504 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:42:48.773495 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 18:42:48.773548 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:42:48.798156 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:42:48.798215 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:48.807221 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:42:48.807357 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:42:48.812674 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:42:48.812760 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:42:48.818755 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:42:48.833423 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:42:48.841710 systemd[1]: Switching root. Jun 25 18:42:48.924185 systemd-journald[176]: Journal stopped Jun 25 18:42:55.405185 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Jun 25 18:42:55.405226 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 18:42:55.405244 kernel: SELinux: policy capability open_perms=1 Jun 25 18:42:55.405260 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 18:42:55.405282 kernel: SELinux: policy capability always_check_network=0 Jun 25 18:42:55.405296 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 18:42:55.405312 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 18:42:55.405330 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 18:42:55.405344 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 18:42:55.405359 kernel: audit: type=1403 audit(1719340972.152:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 18:42:55.405375 systemd[1]: Successfully loaded SELinux policy in 260.772ms. Jun 25 18:42:55.405392 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.049ms. Jun 25 18:42:55.405410 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:42:55.405426 systemd[1]: Detected virtualization microsoft. Jun 25 18:42:55.405446 systemd[1]: Detected architecture x86-64. Jun 25 18:42:55.405462 systemd[1]: Detected first boot. Jun 25 18:42:55.405479 systemd[1]: Hostname set to <ci-4012.0.0-a-d50f1c7422>. Jun 25 18:42:55.405495 systemd[1]: Initializing machine ID from random generator. Jun 25 18:42:55.405512 zram_generator::config[1187]: No configuration found. Jun 25 18:42:55.405532 systemd[1]: Populated /etc with preset unit settings. Jun 25 18:42:55.405548 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 18:42:55.405564 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 18:42:55.405581 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 18:42:55.405598 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 25 18:42:55.405615 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 18:42:55.405633 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 18:42:55.405652 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 18:42:55.405669 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 18:42:55.405686 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 18:42:55.405703 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 18:42:55.405720 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 18:42:55.405737 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:42:55.405754 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:42:55.405771 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 18:42:55.405790 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 18:42:55.405807 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 18:42:55.405824 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:42:55.405841 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 25 18:42:55.405858 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:42:55.405875 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 18:42:55.405897 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 18:42:55.405914 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 18:42:55.405934 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 18:42:55.405952 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:42:55.405969 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:42:55.405989 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:42:55.406006 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:42:55.406023 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 18:42:55.406041 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 18:42:55.406060 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:42:55.406078 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:42:55.406097 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:42:55.406114 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 18:42:55.406132 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 18:42:55.406152 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 18:42:55.406170 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 18:42:55.406188 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:42:55.406206 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jun 25 18:42:55.406224 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 18:42:55.406241 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 18:42:55.406259 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 18:42:55.407229 systemd[1]: Reached target machines.target - Containers. Jun 25 18:42:55.407259 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 18:42:55.407298 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:42:55.407317 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:42:55.407336 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 18:42:55.407353 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:42:55.407371 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:42:55.407389 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:42:55.407407 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 18:42:55.407425 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:42:55.407447 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 18:42:55.407465 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 18:42:55.407483 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 18:42:55.407501 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 18:42:55.407518 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 18:42:55.407536 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:42:55.407556 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:42:55.407574 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 18:42:55.407594 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 18:42:55.407640 systemd-journald[1292]: Collecting audit messages is disabled. Jun 25 18:42:55.407677 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:42:55.407695 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 18:42:55.407716 systemd[1]: Stopped verity-setup.service. Jun 25 18:42:55.407735 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:42:55.407753 systemd-journald[1292]: Journal started Jun 25 18:42:55.407788 systemd-journald[1292]: Runtime Journal (/run/log/journal/fd2ab30d792a4eaf9b6721a818f276f1) is 8.0M, max 158.8M, 150.8M free. Jun 25 18:42:54.560060 systemd[1]: Queued start job for default target multi-user.target. Jun 25 18:42:54.704258 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 25 18:42:54.704674 systemd[1]: systemd-journald.service: Deactivated successfully. 
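The runtime journal limits above (8.0M in use, 158.8M max) are sized automatically from the space available in /run; they can be capped explicitly in journald.conf if desired. Illustrative overrides only, these values are not configured on this machine:

    # /etc/systemd/journald.conf  (illustrative; mirrors the automatic limits seen in this log)
    [Journal]
    RuntimeMaxUse=158M
    SystemMaxUse=2.6G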
Jun 25 18:42:55.417227 kernel: loop: module loaded Jun 25 18:42:55.417313 kernel: ACPI: bus type drm_connector registered Jun 25 18:42:55.417343 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:42:55.422570 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 18:42:55.425887 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 18:42:55.431464 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 18:42:55.434312 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 18:42:55.437446 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 18:42:55.440524 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 18:42:55.443503 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 18:42:55.447184 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:42:55.451124 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 18:42:55.451306 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 18:42:55.455027 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:42:55.455192 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:42:55.458676 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:42:55.458844 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:42:55.462432 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:42:55.462592 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:42:55.466963 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:42:55.467333 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:42:55.472898 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:42:55.483976 kernel: fuse: init (API version 7.39) Jun 25 18:42:55.476764 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 18:42:55.476947 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 18:42:55.480462 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 18:42:55.485248 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 18:42:55.504697 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 18:42:55.516367 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 18:42:55.528339 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 18:42:55.535737 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 18:42:55.535868 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:42:55.539929 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jun 25 18:42:55.546664 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 25 18:42:55.551103 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 18:42:55.553994 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jun 25 18:42:55.562802 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 18:42:55.569388 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 18:42:55.572953 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:42:55.578440 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 25 18:42:55.581859 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:42:55.586169 systemd-journald[1292]: Time spent on flushing to /var/log/journal/fd2ab30d792a4eaf9b6721a818f276f1 is 30.059ms for 954 entries. Jun 25 18:42:55.586169 systemd-journald[1292]: System Journal (/var/log/journal/fd2ab30d792a4eaf9b6721a818f276f1) is 8.0M, max 2.6G, 2.6G free. Jun 25 18:42:55.631850 systemd-journald[1292]: Received client request to flush runtime journal. Jun 25 18:42:55.585943 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:42:55.598689 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 18:42:55.611492 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:42:55.618493 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:42:55.624000 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 18:42:55.627545 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 18:42:55.632717 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 25 18:42:55.636882 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 18:42:55.654571 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 18:42:55.660938 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 25 18:42:55.669164 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 18:42:55.681495 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 18:42:55.687883 udevadm[1332]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 18:42:55.704294 kernel: loop0: detected capacity change from 0 to 80568 Jun 25 18:42:55.753231 kernel: block loop0: the capability attribute has been deprecated. Jun 25 18:42:55.767772 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 18:42:55.768786 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 18:42:55.819076 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:42:55.826407 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Jun 25 18:42:55.826432 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Jun 25 18:42:55.834144 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:42:55.844432 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 18:42:56.011731 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jun 25 18:42:56.022735 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:42:56.039057 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Jun 25 18:42:56.039081 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Jun 25 18:42:56.043232 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:42:56.263298 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 18:42:56.284290 kernel: loop1: detected capacity change from 0 to 211296 Jun 25 18:42:56.330294 kernel: loop2: detected capacity change from 0 to 62456 Jun 25 18:42:56.783300 kernel: loop3: detected capacity change from 0 to 139760 Jun 25 18:42:57.139165 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 18:42:57.148459 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:42:57.175396 systemd-udevd[1350]: Using default interface naming scheme 'v255'. Jun 25 18:42:57.332523 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:42:57.349187 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:42:57.421408 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1371) Jun 25 18:42:57.420523 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 25 18:42:57.452545 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 18:42:57.508315 kernel: hv_vmbus: registering driver hv_balloon Jun 25 18:42:57.511410 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 25 18:42:57.523902 kernel: hv_vmbus: registering driver hyperv_fb Jun 25 18:42:57.523991 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 25 18:42:57.524719 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 18:42:57.535379 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 18:42:57.535444 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 25 18:42:57.555180 kernel: Console: switching to colour dummy device 80x25 Jun 25 18:42:57.561683 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 18:42:57.582925 kernel: loop4: detected capacity change from 0 to 80568 Jun 25 18:42:57.602308 kernel: loop5: detected capacity change from 0 to 211296 Jun 25 18:42:57.694615 kernel: loop6: detected capacity change from 0 to 62456 Jun 25 18:42:57.714287 kernel: loop7: detected capacity change from 0 to 139760 Jun 25 18:42:57.715419 systemd-networkd[1359]: lo: Link UP Jun 25 18:42:57.715431 systemd-networkd[1359]: lo: Gained carrier Jun 25 18:42:57.726919 systemd-networkd[1359]: Enumeration completed Jun 25 18:42:57.727035 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:42:57.728518 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:42:57.728526 systemd-networkd[1359]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:42:57.735427 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 18:42:57.740548 (sd-merge)[1392]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jun 25 18:42:57.743372 (sd-merge)[1392]: Merged extensions into '/usr'. 
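At this point sd-merge has overlaid the containerd-flatcar, docker-flatcar, kubernetes, and oem-azure extension images onto /usr (and /opt). The merged state can be inspected on a running system with:

    # show which system extension images are currently merged
    systemd-sysext status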
Jun 25 18:42:57.769559 systemd[1]: Reloading requested from client PID 1322 ('systemd-sysext') (unit systemd-sysext.service)... Jun 25 18:42:57.769573 systemd[1]: Reloading... Jun 25 18:42:57.831900 kernel: mlx5_core 3c17:00:02.0 enP15383s1: Link up Jun 25 18:42:57.861976 kernel: hv_netvsc 002248a0-8c7d-0022-48a0-8c7d002248a0 eth0: Data path switched to VF: enP15383s1 Jun 25 18:42:57.863816 systemd-networkd[1359]: enP15383s1: Link UP Jun 25 18:42:57.865061 systemd-networkd[1359]: eth0: Link UP Jun 25 18:42:57.867298 systemd-networkd[1359]: eth0: Gained carrier Jun 25 18:42:57.867424 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:42:57.886703 systemd-networkd[1359]: enP15383s1: Gained carrier Jun 25 18:42:57.910329 zram_generator::config[1429]: No configuration found. Jun 25 18:42:57.931294 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1356) Jun 25 18:42:57.955382 systemd-networkd[1359]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 25 18:42:57.985494 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jun 25 18:42:58.205660 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:42:58.303179 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 18:42:58.307793 systemd[1]: Reloading finished in 537 ms. Jun 25 18:42:58.344462 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 18:42:58.377569 systemd[1]: Starting ensure-sysext.service... Jun 25 18:42:58.383402 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 18:42:58.389542 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:42:58.399533 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:58.408398 systemd[1]: Reloading requested from client PID 1520 ('systemctl') (unit ensure-sysext.service)... Jun 25 18:42:58.408409 systemd[1]: Reloading... Jun 25 18:42:58.443608 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 18:42:58.444114 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 18:42:58.445412 systemd-tmpfiles[1522]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 18:42:58.446042 systemd-tmpfiles[1522]: ACLs are not supported, ignoring. Jun 25 18:42:58.446132 systemd-tmpfiles[1522]: ACLs are not supported, ignoring. Jun 25 18:42:58.452571 systemd-tmpfiles[1522]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:42:58.453027 systemd-tmpfiles[1522]: Skipping /boot Jun 25 18:42:58.469520 systemd-tmpfiles[1522]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:42:58.469655 systemd-tmpfiles[1522]: Skipping /boot Jun 25 18:42:58.509295 zram_generator::config[1553]: No configuration found. 
Jun 25 18:42:58.635108 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:42:58.711540 systemd[1]: Reloading finished in 302 ms. Jun 25 18:42:58.736832 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 18:42:58.741515 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:42:58.745606 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:42:58.745789 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:58.761772 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:42:58.766645 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 18:42:58.773025 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 18:42:58.779544 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:42:58.789532 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 18:42:58.798923 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:42:58.805645 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 18:42:58.819583 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:42:58.819953 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:42:58.827632 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 18:42:58.833608 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:42:58.838800 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:42:58.849791 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:42:58.855538 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:42:58.855796 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:42:58.859491 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:42:58.859687 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:42:58.867948 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 18:42:58.873478 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:42:58.873666 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:42:58.892642 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:42:58.892915 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:42:58.895705 lvm[1631]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:42:58.903444 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
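Editor's note: the repeated docker.socket notice above is systemd rewriting the legacy /var/run/docker.sock path to /run/docker.sock at load time, as the message itself suggests. A drop-in that makes the unit explicit would look roughly like the sketch below; the drop-in path is an assumption (on Flatcar the vendor unit would normally be fixed upstream instead of overridden locally).

    # /etc/systemd/system/docker.socket.d/10-run-path.conf (hypothetical drop-in)
    [Socket]
    ListenStream=                  # clear the inherited /var/run/docker.sock entry
    ListenStream=/run/docker.sock  # re-add it under /run, as the log message advises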
Jun 25 18:42:58.909306 systemd-networkd[1359]: eth0: Gained IPv6LL Jun 25 18:42:58.916140 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:42:58.922351 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:42:58.922721 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:42:58.925131 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 18:42:58.933472 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 18:42:58.938132 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:42:58.938453 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:42:58.944549 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:42:58.945041 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:42:58.945694 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 18:42:58.949842 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:42:58.953531 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 18:42:58.956365 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:42:58.963604 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:42:58.963802 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:42:58.968629 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:42:58.969208 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:42:58.972212 lvm[1651]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:42:58.979553 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:42:58.991649 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:42:59.008044 systemd-resolved[1620]: Positive Trust Anchors: Jun 25 18:42:59.008066 systemd-resolved[1620]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:42:59.008118 systemd-resolved[1620]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:42:59.010473 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:42:59.014080 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:42:59.014242 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jun 25 18:42:59.015607 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 18:42:59.015849 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:42:59.016959 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 18:42:59.017773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:42:59.017895 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:42:59.018577 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:42:59.018690 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:42:59.019523 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:42:59.019634 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:42:59.023045 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:42:59.026587 systemd[1]: Finished ensure-sysext.service. Jun 25 18:42:59.032490 systemd-resolved[1620]: Using system hostname 'ci-4012.0.0-a-d50f1c7422'. Jun 25 18:42:59.034758 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:42:59.037928 systemd[1]: Reached target network.target - Network. Jun 25 18:42:59.038017 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 18:42:59.039023 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:42:59.048556 augenrules[1664]: No rules Jun 25 18:42:59.049082 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:42:59.277163 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 18:42:59.281772 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 18:42:59.374119 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:42:59.800479 systemd-networkd[1359]: enP15383s1: Gained IPv6LL Jun 25 18:43:01.993217 ldconfig[1317]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 18:43:02.003607 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 18:43:02.011451 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 18:43:02.023069 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 18:43:02.026430 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:43:02.029572 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 18:43:02.032879 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 18:43:02.036448 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 18:43:02.039603 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 18:43:02.042933 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 18:43:02.046314 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Jun 25 18:43:02.046358 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:43:02.048674 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:43:02.110014 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 18:43:02.114933 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 18:43:02.127194 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 18:43:02.130928 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 18:43:02.134035 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:43:02.136703 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:43:02.139469 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:43:02.139498 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:43:02.147519 systemd[1]: Starting chronyd.service - NTP client/server... Jun 25 18:43:02.152398 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 18:43:02.164383 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 18:43:02.173440 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 18:43:02.181372 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 18:43:02.187591 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 18:43:02.190482 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 18:43:02.198371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:02.203644 jq[1685]: false Jun 25 18:43:02.204260 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 18:43:02.210162 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 18:43:02.213946 (chronyd)[1679]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jun 25 18:43:02.215372 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 18:43:02.222445 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 18:43:02.233491 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 18:43:02.245582 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 18:43:02.245867 chronyd[1699]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jun 25 18:43:02.248975 chronyd[1699]: Timezone right/UTC failed leap second check, ignoring Jun 25 18:43:02.249341 chronyd[1699]: Loaded seccomp filter (level 2) Jun 25 18:43:02.252328 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 18:43:02.252877 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 18:43:02.258148 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 18:43:02.264412 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 18:43:02.272924 systemd[1]: Started chronyd.service - NTP client/server. 
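Editor's note: chronyd 4.5 starts here with its shipped configuration (the unset OPTIONS environment variable is harmless). On Hyper-V/Azure guests a common chrony setup also references the host's PTP clock; the excerpt below is a sketch under that assumption, not this image's actual configuration, and the file path varies by distribution.

    # chrony.conf (illustrative excerpt, path and values assumed)
    refclock PHC /dev/ptp0 poll 3 dpoll -2 offset 0   # Hyper-V host clock, if /dev/ptp0 is exposed
    makestep 1.0 3                                    # step the clock on large offsets during the first updates
    driftfile /var/lib/chrony/drift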
Jun 25 18:43:02.283299 extend-filesystems[1686]: Found loop4 Jun 25 18:43:02.283299 extend-filesystems[1686]: Found loop5 Jun 25 18:43:02.283299 extend-filesystems[1686]: Found loop6 Jun 25 18:43:02.283299 extend-filesystems[1686]: Found loop7 Jun 25 18:43:02.283299 extend-filesystems[1686]: Found sda Jun 25 18:43:02.283299 extend-filesystems[1686]: Found sda1 Jun 25 18:43:02.283299 extend-filesystems[1686]: Found sda2 Jun 25 18:43:02.283299 extend-filesystems[1686]: Found sda3 Jun 25 18:43:02.283299 extend-filesystems[1686]: Found usr Jun 25 18:43:02.283299 extend-filesystems[1686]: Found sda4 Jun 25 18:43:02.283299 extend-filesystems[1686]: Found sda6 Jun 25 18:43:02.283299 extend-filesystems[1686]: Found sda7 Jun 25 18:43:02.283299 extend-filesystems[1686]: Found sda9 Jun 25 18:43:02.283299 extend-filesystems[1686]: Checking size of /dev/sda9 Jun 25 18:43:02.294047 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 18:43:02.294340 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 18:43:02.320770 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 18:43:02.321242 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 18:43:02.329223 jq[1704]: true Jun 25 18:43:02.332775 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 18:43:02.333371 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 18:43:02.387601 extend-filesystems[1686]: Old size kept for /dev/sda9 Jun 25 18:43:02.387601 extend-filesystems[1686]: Found sr0 Jun 25 18:43:02.380044 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 18:43:02.383904 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 18:43:02.384130 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 18:43:02.393744 dbus-daemon[1682]: [system] SELinux support is enabled Jun 25 18:43:02.408665 jq[1718]: true Jun 25 18:43:02.395383 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 18:43:02.408379 (ntainerd)[1716]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 25 18:43:02.415462 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 18:43:02.416841 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 18:43:02.427403 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 18:43:02.427429 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 18:43:02.453126 tar[1714]: linux-amd64/helm Jun 25 18:43:02.490043 update_engine[1701]: I0625 18:43:02.488946 1701 main.cc:92] Flatcar Update Engine starting Jun 25 18:43:02.491358 systemd-logind[1697]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 18:43:02.499592 systemd-logind[1697]: New seat seat0. Jun 25 18:43:02.506570 systemd[1]: Started update-engine.service - Update Engine. 
Jun 25 18:43:02.513923 update_engine[1701]: I0625 18:43:02.513566 1701 update_check_scheduler.cc:74] Next update check in 11m5s Jun 25 18:43:02.517435 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 18:43:02.525880 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 18:43:02.603482 coreos-metadata[1681]: Jun 25 18:43:02.603 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 18:43:02.611994 coreos-metadata[1681]: Jun 25 18:43:02.611 INFO Fetch successful Jun 25 18:43:02.611994 coreos-metadata[1681]: Jun 25 18:43:02.611 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 25 18:43:02.617683 coreos-metadata[1681]: Jun 25 18:43:02.617 INFO Fetch successful Jun 25 18:43:02.619681 coreos-metadata[1681]: Jun 25 18:43:02.619 INFO Fetching http://168.63.129.16/machine/9e92155f-3f14-4a62-ad8c-e75c95013df0/b785975e%2D93fb%2D4c74%2Dad88%2D056a710a519d.%5Fci%2D4012.0.0%2Da%2Dd50f1c7422?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 25 18:43:02.625334 coreos-metadata[1681]: Jun 25 18:43:02.623 INFO Fetch successful Jun 25 18:43:02.625334 coreos-metadata[1681]: Jun 25 18:43:02.623 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 25 18:43:02.645791 coreos-metadata[1681]: Jun 25 18:43:02.641 INFO Fetch successful Jun 25 18:43:02.667293 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1758) Jun 25 18:43:02.661942 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 18:43:02.667476 bash[1764]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:43:02.669261 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 25 18:43:02.741567 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 18:43:02.760169 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 18:43:02.918185 locksmithd[1747]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 18:43:02.950810 sshd_keygen[1711]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 18:43:02.982427 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 18:43:02.994367 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 18:43:03.005577 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 25 18:43:03.030690 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 18:43:03.032007 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 18:43:03.044708 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 18:43:03.082509 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 25 18:43:03.193162 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 18:43:03.206678 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 18:43:03.218610 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 18:43:03.221747 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 18:43:03.422115 tar[1714]: linux-amd64/LICENSE Jun 25 18:43:03.422504 tar[1714]: linux-amd64/README.md Jun 25 18:43:03.438569 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
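Editor's note: the coreos-metadata fetches above hit two Azure endpoints, the WireServer at 168.63.129.16 and the Instance Metadata Service at 169.254.169.254. The IMDS query can be reproduced from a shell on the node; the sketch below reuses the exact URL and api-version from the log and adds the Metadata header that IMDS requires.

    # Query the VM size the same way coreos-metadata does
    curl -s -H "Metadata: true" \
      "http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text"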
Jun 25 18:43:03.534348 containerd[1716]: time="2024-06-25T18:43:03.534232300Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Jun 25 18:43:03.571153 containerd[1716]: time="2024-06-25T18:43:03.571104700Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 18:43:03.571306 containerd[1716]: time="2024-06-25T18:43:03.571236300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:43:03.573810 containerd[1716]: time="2024-06-25T18:43:03.573140600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:43:03.573810 containerd[1716]: time="2024-06-25T18:43:03.573178400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:43:03.573810 containerd[1716]: time="2024-06-25T18:43:03.573468300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:43:03.573810 containerd[1716]: time="2024-06-25T18:43:03.573498200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 18:43:03.573810 containerd[1716]: time="2024-06-25T18:43:03.573637100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 18:43:03.573810 containerd[1716]: time="2024-06-25T18:43:03.573703400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:43:03.573810 containerd[1716]: time="2024-06-25T18:43:03.573719900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 18:43:03.573810 containerd[1716]: time="2024-06-25T18:43:03.573795300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:43:03.574138 containerd[1716]: time="2024-06-25T18:43:03.574038300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 18:43:03.574138 containerd[1716]: time="2024-06-25T18:43:03.574062700Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 18:43:03.574138 containerd[1716]: time="2024-06-25T18:43:03.574077100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:43:03.574258 containerd[1716]: time="2024-06-25T18:43:03.574232500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:43:03.574324 containerd[1716]: time="2024-06-25T18:43:03.574258000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jun 25 18:43:03.574360 containerd[1716]: time="2024-06-25T18:43:03.574344800Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 18:43:03.574410 containerd[1716]: time="2024-06-25T18:43:03.574362000Z" level=info msg="metadata content store policy set" policy=shared Jun 25 18:43:03.585809 containerd[1716]: time="2024-06-25T18:43:03.585772700Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 18:43:03.585809 containerd[1716]: time="2024-06-25T18:43:03.585804100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 18:43:03.585951 containerd[1716]: time="2024-06-25T18:43:03.585824200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 18:43:03.585951 containerd[1716]: time="2024-06-25T18:43:03.585871300Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 18:43:03.585951 containerd[1716]: time="2024-06-25T18:43:03.585892700Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 18:43:03.585951 containerd[1716]: time="2024-06-25T18:43:03.585907100Z" level=info msg="NRI interface is disabled by configuration." Jun 25 18:43:03.585951 containerd[1716]: time="2024-06-25T18:43:03.585922400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 18:43:03.586115 containerd[1716]: time="2024-06-25T18:43:03.586047900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 18:43:03.586115 containerd[1716]: time="2024-06-25T18:43:03.586068400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 18:43:03.586115 containerd[1716]: time="2024-06-25T18:43:03.586086900Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 18:43:03.586115 containerd[1716]: time="2024-06-25T18:43:03.586105400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 18:43:03.586255 containerd[1716]: time="2024-06-25T18:43:03.586124400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 18:43:03.586255 containerd[1716]: time="2024-06-25T18:43:03.586146400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 18:43:03.586255 containerd[1716]: time="2024-06-25T18:43:03.586165100Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 18:43:03.586255 containerd[1716]: time="2024-06-25T18:43:03.586187300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 18:43:03.586255 containerd[1716]: time="2024-06-25T18:43:03.586206900Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 18:43:03.586255 containerd[1716]: time="2024-06-25T18:43:03.586224800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jun 25 18:43:03.586255 containerd[1716]: time="2024-06-25T18:43:03.586242500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 18:43:03.586487 containerd[1716]: time="2024-06-25T18:43:03.586259400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 18:43:03.586487 containerd[1716]: time="2024-06-25T18:43:03.586397900Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 18:43:03.586764 containerd[1716]: time="2024-06-25T18:43:03.586738300Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 18:43:03.587000 containerd[1716]: time="2024-06-25T18:43:03.586870200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 18:43:03.587000 containerd[1716]: time="2024-06-25T18:43:03.586895200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 18:43:03.587000 containerd[1716]: time="2024-06-25T18:43:03.586942900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 18:43:03.587208 containerd[1716]: time="2024-06-25T18:43:03.587150000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 18:43:03.587208 containerd[1716]: time="2024-06-25T18:43:03.587174100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 18:43:03.587382 containerd[1716]: time="2024-06-25T18:43:03.587193100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 18:43:03.587382 containerd[1716]: time="2024-06-25T18:43:03.587325300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 18:43:03.587382 containerd[1716]: time="2024-06-25T18:43:03.587345500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 18:43:03.587382 containerd[1716]: time="2024-06-25T18:43:03.587363900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 18:43:03.587583 containerd[1716]: time="2024-06-25T18:43:03.587559200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 18:43:03.587639 containerd[1716]: time="2024-06-25T18:43:03.587592000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 18:43:03.587639 containerd[1716]: time="2024-06-25T18:43:03.587614900Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 18:43:03.588873 containerd[1716]: time="2024-06-25T18:43:03.587771200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 18:43:03.588873 containerd[1716]: time="2024-06-25T18:43:03.587797600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 18:43:03.588873 containerd[1716]: time="2024-06-25T18:43:03.587815600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jun 25 18:43:03.588873 containerd[1716]: time="2024-06-25T18:43:03.587833800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 18:43:03.588873 containerd[1716]: time="2024-06-25T18:43:03.587855900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 18:43:03.588873 containerd[1716]: time="2024-06-25T18:43:03.587874800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 18:43:03.588873 containerd[1716]: time="2024-06-25T18:43:03.587895100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 18:43:03.588873 containerd[1716]: time="2024-06-25T18:43:03.587912700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 25 18:43:03.589237 containerd[1716]: time="2024-06-25T18:43:03.588249800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 18:43:03.589237 containerd[1716]: 
time="2024-06-25T18:43:03.588389700Z" level=info msg="Connect containerd service" Jun 25 18:43:03.589237 containerd[1716]: time="2024-06-25T18:43:03.588422600Z" level=info msg="using legacy CRI server" Jun 25 18:43:03.589237 containerd[1716]: time="2024-06-25T18:43:03.588431700Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 18:43:03.589237 containerd[1716]: time="2024-06-25T18:43:03.588545700Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 18:43:03.589580 containerd[1716]: time="2024-06-25T18:43:03.589240700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:43:03.589580 containerd[1716]: time="2024-06-25T18:43:03.589404800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 18:43:03.589580 containerd[1716]: time="2024-06-25T18:43:03.589434300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 25 18:43:03.589580 containerd[1716]: time="2024-06-25T18:43:03.589505300Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 18:43:03.589580 containerd[1716]: time="2024-06-25T18:43:03.589527100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 25 18:43:03.589580 containerd[1716]: time="2024-06-25T18:43:03.589467100Z" level=info msg="Start subscribing containerd event" Jun 25 18:43:03.589787 containerd[1716]: time="2024-06-25T18:43:03.589604700Z" level=info msg="Start recovering state" Jun 25 18:43:03.589787 containerd[1716]: time="2024-06-25T18:43:03.589677300Z" level=info msg="Start event monitor" Jun 25 18:43:03.589787 containerd[1716]: time="2024-06-25T18:43:03.589694600Z" level=info msg="Start snapshots syncer" Jun 25 18:43:03.589787 containerd[1716]: time="2024-06-25T18:43:03.589706200Z" level=info msg="Start cni network conf syncer for default" Jun 25 18:43:03.589787 containerd[1716]: time="2024-06-25T18:43:03.589715400Z" level=info msg="Start streaming server" Jun 25 18:43:03.593602 containerd[1716]: time="2024-06-25T18:43:03.590163900Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 18:43:03.593602 containerd[1716]: time="2024-06-25T18:43:03.590230300Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 18:43:03.593602 containerd[1716]: time="2024-06-25T18:43:03.592303000Z" level=info msg="containerd successfully booted in 0.061559s" Jun 25 18:43:03.590472 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 18:43:03.693174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:43:03.697752 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 18:43:03.698206 (kubelet)[1843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:43:03.701585 systemd[1]: Startup finished in 794ms (firmware) + 29.372s (loader) + 996ms (kernel) + 12.291s (initrd) + 11.808s (userspace) = 55.263s. 
Jun 25 18:43:04.164341 login[1824]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 18:43:04.166959 login[1825]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 18:43:04.175887 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 18:43:04.183735 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 18:43:04.186520 systemd-logind[1697]: New session 2 of user core. Jun 25 18:43:04.191872 systemd-logind[1697]: New session 1 of user core. Jun 25 18:43:04.203376 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 18:43:04.210624 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 18:43:04.225494 (systemd)[1854]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:04.392565 systemd[1854]: Queued start job for default target default.target. Jun 25 18:43:04.399198 systemd[1854]: Created slice app.slice - User Application Slice. Jun 25 18:43:04.399350 systemd[1854]: Reached target paths.target - Paths. Jun 25 18:43:04.399372 systemd[1854]: Reached target timers.target - Timers. Jun 25 18:43:04.401576 systemd[1854]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 25 18:43:04.422297 systemd[1854]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 25 18:43:04.423094 systemd[1854]: Reached target sockets.target - Sockets. Jun 25 18:43:04.423116 systemd[1854]: Reached target basic.target - Basic System. Jun 25 18:43:04.424205 systemd[1854]: Reached target default.target - Main User Target. Jun 25 18:43:04.424337 systemd[1854]: Startup finished in 191ms. Jun 25 18:43:04.424663 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 18:43:04.430512 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 18:43:04.432539 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 18:43:04.471237 kubelet[1843]: E0625 18:43:04.471177 1843 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:43:04.473719 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:43:04.473905 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:43:04.474381 systemd[1]: kubelet.service: Consumed 1.017s CPU time. 
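Editor's note: the kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is written during 'kubeadm init' or 'kubeadm join', so the failure at this point in boot is expected and the unit will be restarted later. A minimal sketch of what such a KubeletConfiguration looks like follows; every value shown is an illustrative assumption, not this node's eventual configuration.

    # /var/lib/kubelet/config.yaml (illustrative sketch, normally generated by kubeadm)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd          # matches the SystemdCgroup=true runc option in the containerd config above
    authentication:
      anonymous:
        enabled: false
      webhook:
        enabled: true
    authorization:
      mode: Webhook
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10                 # assumed cluster DNS service IP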
Jun 25 18:43:05.163433 waagent[1822]: 2024-06-25T18:43:05.163331Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jun 25 18:43:05.201256 waagent[1822]: 2024-06-25T18:43:05.163745Z INFO Daemon Daemon OS: flatcar 4012.0.0 Jun 25 18:43:05.201256 waagent[1822]: 2024-06-25T18:43:05.164807Z INFO Daemon Daemon Python: 3.11.9 Jun 25 18:43:05.201256 waagent[1822]: 2024-06-25T18:43:05.165440Z INFO Daemon Daemon Run daemon Jun 25 18:43:05.201256 waagent[1822]: 2024-06-25T18:43:05.165798Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4012.0.0' Jun 25 18:43:05.201256 waagent[1822]: 2024-06-25T18:43:05.166196Z INFO Daemon Daemon Using waagent for provisioning Jun 25 18:43:05.201256 waagent[1822]: 2024-06-25T18:43:05.167265Z INFO Daemon Daemon Activate resource disk Jun 25 18:43:05.201256 waagent[1822]: 2024-06-25T18:43:05.167599Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 25 18:43:05.201256 waagent[1822]: 2024-06-25T18:43:05.171519Z INFO Daemon Daemon Found device: None Jun 25 18:43:05.201256 waagent[1822]: 2024-06-25T18:43:05.172342Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 25 18:43:05.201256 waagent[1822]: 2024-06-25T18:43:05.172801Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 25 18:43:05.201256 waagent[1822]: 2024-06-25T18:43:05.174258Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 25 18:43:05.201256 waagent[1822]: 2024-06-25T18:43:05.174878Z INFO Daemon Daemon Running default provisioning handler Jun 25 18:43:05.204348 waagent[1822]: 2024-06-25T18:43:05.204250Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jun 25 18:43:05.211468 waagent[1822]: 2024-06-25T18:43:05.211412Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 25 18:43:05.216382 waagent[1822]: 2024-06-25T18:43:05.216243Z INFO Daemon Daemon cloud-init is enabled: False Jun 25 18:43:05.220921 waagent[1822]: 2024-06-25T18:43:05.216422Z INFO Daemon Daemon Copying ovf-env.xml Jun 25 18:43:05.331001 waagent[1822]: 2024-06-25T18:43:05.330493Z INFO Daemon Daemon Successfully mounted dvd Jun 25 18:43:05.344913 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 25 18:43:05.347556 waagent[1822]: 2024-06-25T18:43:05.347490Z INFO Daemon Daemon Detect protocol endpoint Jun 25 18:43:05.350134 waagent[1822]: 2024-06-25T18:43:05.349998Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 25 18:43:05.363080 waagent[1822]: 2024-06-25T18:43:05.350224Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jun 25 18:43:05.363080 waagent[1822]: 2024-06-25T18:43:05.351234Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 25 18:43:05.363080 waagent[1822]: 2024-06-25T18:43:05.351822Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 25 18:43:05.363080 waagent[1822]: 2024-06-25T18:43:05.352172Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 25 18:43:05.394011 waagent[1822]: 2024-06-25T18:43:05.393950Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 25 18:43:05.402798 waagent[1822]: 2024-06-25T18:43:05.394392Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 25 18:43:05.402798 waagent[1822]: 2024-06-25T18:43:05.394802Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 25 18:43:05.538455 waagent[1822]: 2024-06-25T18:43:05.538298Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 25 18:43:05.542592 waagent[1822]: 2024-06-25T18:43:05.542522Z INFO Daemon Daemon Forcing an update of the goal state. Jun 25 18:43:05.549395 waagent[1822]: 2024-06-25T18:43:05.549341Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 25 18:43:05.558628 waagent[1822]: 2024-06-25T18:43:05.558576Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.151 Jun 25 18:43:05.574821 waagent[1822]: 2024-06-25T18:43:05.559204Z INFO Daemon Jun 25 18:43:05.574821 waagent[1822]: 2024-06-25T18:43:05.559412Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 3d0b2db6-f4fc-4fa4-a37e-dde114947304 eTag: 18110325183745592813 source: Fabric] Jun 25 18:43:05.574821 waagent[1822]: 2024-06-25T18:43:05.560112Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jun 25 18:43:05.574821 waagent[1822]: 2024-06-25T18:43:05.561226Z INFO Daemon Jun 25 18:43:05.574821 waagent[1822]: 2024-06-25T18:43:05.562197Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 25 18:43:05.577812 waagent[1822]: 2024-06-25T18:43:05.577765Z INFO Daemon Daemon Downloading artifacts profile blob Jun 25 18:43:05.677413 waagent[1822]: 2024-06-25T18:43:05.677331Z INFO Daemon Downloaded certificate {'thumbprint': '76303BA7F18694F11951F99892AE53A613A7AD4D', 'hasPrivateKey': True} Jun 25 18:43:05.682783 waagent[1822]: 2024-06-25T18:43:05.682720Z INFO Daemon Downloaded certificate {'thumbprint': '206FAC9998C8FA12F97D7D64BBA5F2D99E34555A', 'hasPrivateKey': False} Jun 25 18:43:05.688094 waagent[1822]: 2024-06-25T18:43:05.688035Z INFO Daemon Fetch goal state completed Jun 25 18:43:05.698416 waagent[1822]: 2024-06-25T18:43:05.698374Z INFO Daemon Daemon Starting provisioning Jun 25 18:43:05.705924 waagent[1822]: 2024-06-25T18:43:05.700840Z INFO Daemon Daemon Handle ovf-env.xml. Jun 25 18:43:05.705924 waagent[1822]: 2024-06-25T18:43:05.701015Z INFO Daemon Daemon Set hostname [ci-4012.0.0-a-d50f1c7422] Jun 25 18:43:05.716931 waagent[1822]: 2024-06-25T18:43:05.716875Z INFO Daemon Daemon Publish hostname [ci-4012.0.0-a-d50f1c7422] Jun 25 18:43:05.725189 waagent[1822]: 2024-06-25T18:43:05.717285Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 25 18:43:05.725189 waagent[1822]: 2024-06-25T18:43:05.717771Z INFO Daemon Daemon Primary interface is [eth0] Jun 25 18:43:05.741785 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:43:05.741795 systemd-networkd[1359]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 25 18:43:05.741843 systemd-networkd[1359]: eth0: DHCP lease lost Jun 25 18:43:05.743069 waagent[1822]: 2024-06-25T18:43:05.742998Z INFO Daemon Daemon Create user account if not exists Jun 25 18:43:05.746078 waagent[1822]: 2024-06-25T18:43:05.745912Z INFO Daemon Daemon User core already exists, skip useradd Jun 25 18:43:05.746078 waagent[1822]: 2024-06-25T18:43:05.746111Z INFO Daemon Daemon Configure sudoer Jun 25 18:43:05.746078 waagent[1822]: 2024-06-25T18:43:05.747359Z INFO Daemon Daemon Configure sshd Jun 25 18:43:05.746078 waagent[1822]: 2024-06-25T18:43:05.747752Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 25 18:43:05.746078 waagent[1822]: 2024-06-25T18:43:05.748066Z INFO Daemon Daemon Deploy ssh public key. Jun 25 18:43:05.760397 systemd-networkd[1359]: eth0: DHCPv6 lease lost Jun 25 18:43:05.794354 systemd-networkd[1359]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 25 18:43:07.043004 waagent[1822]: 2024-06-25T18:43:07.042916Z INFO Daemon Daemon Provisioning complete Jun 25 18:43:07.057422 waagent[1822]: 2024-06-25T18:43:07.057359Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 25 18:43:07.064863 waagent[1822]: 2024-06-25T18:43:07.057690Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jun 25 18:43:07.064863 waagent[1822]: 2024-06-25T18:43:07.058306Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jun 25 18:43:07.181376 waagent[1905]: 2024-06-25T18:43:07.181284Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jun 25 18:43:07.181815 waagent[1905]: 2024-06-25T18:43:07.181439Z INFO ExtHandler ExtHandler OS: flatcar 4012.0.0 Jun 25 18:43:07.181815 waagent[1905]: 2024-06-25T18:43:07.181520Z INFO ExtHandler ExtHandler Python: 3.11.9 Jun 25 18:43:07.219419 waagent[1905]: 2024-06-25T18:43:07.219323Z INFO ExtHandler ExtHandler Distro: flatcar-4012.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jun 25 18:43:07.219661 waagent[1905]: 2024-06-25T18:43:07.219603Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 18:43:07.219770 waagent[1905]: 2024-06-25T18:43:07.219721Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 18:43:07.227590 waagent[1905]: 2024-06-25T18:43:07.227518Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 25 18:43:07.233233 waagent[1905]: 2024-06-25T18:43:07.233178Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Jun 25 18:43:07.233671 waagent[1905]: 2024-06-25T18:43:07.233615Z INFO ExtHandler Jun 25 18:43:07.233740 waagent[1905]: 2024-06-25T18:43:07.233706Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 24770840-06ec-4d75-ae7b-35f37fdfccd8 eTag: 18110325183745592813 source: Fabric] Jun 25 18:43:07.234044 waagent[1905]: 2024-06-25T18:43:07.233993Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jun 25 18:43:07.234616 waagent[1905]: 2024-06-25T18:43:07.234560Z INFO ExtHandler Jun 25 18:43:07.234684 waagent[1905]: 2024-06-25T18:43:07.234642Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 25 18:43:07.238559 waagent[1905]: 2024-06-25T18:43:07.238507Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 25 18:43:07.317303 waagent[1905]: 2024-06-25T18:43:07.317160Z INFO ExtHandler Downloaded certificate {'thumbprint': '76303BA7F18694F11951F99892AE53A613A7AD4D', 'hasPrivateKey': True} Jun 25 18:43:07.317672 waagent[1905]: 2024-06-25T18:43:07.317617Z INFO ExtHandler Downloaded certificate {'thumbprint': '206FAC9998C8FA12F97D7D64BBA5F2D99E34555A', 'hasPrivateKey': False} Jun 25 18:43:07.318094 waagent[1905]: 2024-06-25T18:43:07.318042Z INFO ExtHandler Fetch goal state completed Jun 25 18:43:07.334158 waagent[1905]: 2024-06-25T18:43:07.334093Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1905 Jun 25 18:43:07.334313 waagent[1905]: 2024-06-25T18:43:07.334253Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 25 18:43:07.335819 waagent[1905]: 2024-06-25T18:43:07.335760Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4012.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jun 25 18:43:07.336184 waagent[1905]: 2024-06-25T18:43:07.336132Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 25 18:43:07.356441 waagent[1905]: 2024-06-25T18:43:07.356394Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 25 18:43:07.356646 waagent[1905]: 2024-06-25T18:43:07.356595Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 25 18:43:07.363134 waagent[1905]: 2024-06-25T18:43:07.363092Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 25 18:43:07.369905 systemd[1]: Reloading requested from client PID 1920 ('systemctl') (unit waagent.service)... Jun 25 18:43:07.369921 systemd[1]: Reloading... Jun 25 18:43:07.443903 zram_generator::config[1948]: No configuration found. Jun 25 18:43:07.569358 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:43:07.645783 systemd[1]: Reloading finished in 275 ms. Jun 25 18:43:07.669146 waagent[1905]: 2024-06-25T18:43:07.668665Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jun 25 18:43:07.684624 systemd[1]: Reloading requested from client PID 2008 ('systemctl') (unit waagent.service)... Jun 25 18:43:07.684645 systemd[1]: Reloading... Jun 25 18:43:07.766544 zram_generator::config[2039]: No configuration found. Jun 25 18:43:07.878825 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:43:07.958004 systemd[1]: Reloading finished in 272 ms. 
Jun 25 18:43:07.986038 waagent[1905]: 2024-06-25T18:43:07.985926Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 25 18:43:07.987303 waagent[1905]: 2024-06-25T18:43:07.986143Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 25 18:43:10.594141 waagent[1905]: 2024-06-25T18:43:10.594042Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 25 18:43:10.594955 waagent[1905]: 2024-06-25T18:43:10.594880Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jun 25 18:43:10.595876 waagent[1905]: 2024-06-25T18:43:10.595825Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 25 18:43:10.595998 waagent[1905]: 2024-06-25T18:43:10.595953Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 18:43:10.596504 waagent[1905]: 2024-06-25T18:43:10.596451Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 18:43:10.596569 waagent[1905]: 2024-06-25T18:43:10.596523Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 25 18:43:10.596824 waagent[1905]: 2024-06-25T18:43:10.596766Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 25 18:43:10.596917 waagent[1905]: 2024-06-25T18:43:10.596872Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 25 18:43:10.597157 waagent[1905]: 2024-06-25T18:43:10.597115Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 18:43:10.597728 waagent[1905]: 2024-06-25T18:43:10.597671Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jun 25 18:43:10.597837 waagent[1905]: 2024-06-25T18:43:10.597785Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 25 18:43:10.598350 waagent[1905]: 2024-06-25T18:43:10.598296Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 25 18:43:10.598495 waagent[1905]: 2024-06-25T18:43:10.598439Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 18:43:10.598565 waagent[1905]: 2024-06-25T18:43:10.598503Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
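[annotation] The EnvHandler notes above that the DROP rule is not present yet and that the environment thread will set it up. One way to perform that presence check is `iptables -C`, which exits non-zero when the rule is absent; the rule arguments below mirror the rule listing printed a little later in this log, and using `-C` here is an assumption about approach, not a statement about waagent internals.

    # Sketch: check whether the WireServer DROP rule is already installed.
    # `iptables -w -C ...` exits 0 if the rule exists, non-zero otherwise.
    import subprocess

    RULE = ["OUTPUT", "-d", "168.63.129.16", "-p", "tcp",
            "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"]

    def drop_rule_present() -> bool:
        return subprocess.run(
            ["iptables", "-w", "-C", *RULE],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        ).returncode == 0

    if __name__ == "__main__":
        print("DROP rule present:", drop_rule_present())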
Jun 25 18:43:10.598684 waagent[1905]: 2024-06-25T18:43:10.598642Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 25 18:43:10.598684 waagent[1905]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 25 18:43:10.598684 waagent[1905]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jun 25 18:43:10.598684 waagent[1905]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 25 18:43:10.598684 waagent[1905]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 25 18:43:10.598684 waagent[1905]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 25 18:43:10.598684 waagent[1905]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 25 18:43:10.599834 waagent[1905]: 2024-06-25T18:43:10.599781Z INFO EnvHandler ExtHandler Configure routes Jun 25 18:43:10.600451 waagent[1905]: 2024-06-25T18:43:10.600404Z INFO EnvHandler ExtHandler Gateway:None Jun 25 18:43:10.600885 waagent[1905]: 2024-06-25T18:43:10.600845Z INFO EnvHandler ExtHandler Routes:None Jun 25 18:43:10.605640 waagent[1905]: 2024-06-25T18:43:10.605583Z INFO ExtHandler ExtHandler Jun 25 18:43:10.605993 waagent[1905]: 2024-06-25T18:43:10.605938Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: eb1775ba-2c56-46b4-b703-974cc9b4ef50 correlation 0b1201c6-c4b4-4935-b19c-172d1f6e728b created: 2024-06-25T18:41:57.343132Z] Jun 25 18:43:10.607009 waagent[1905]: 2024-06-25T18:43:10.606963Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jun 25 18:43:10.608302 waagent[1905]: 2024-06-25T18:43:10.608244Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jun 25 18:43:10.644800 waagent[1905]: 2024-06-25T18:43:10.644742Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 00728ADD-B496-4E5B-939C-4B57C26677B7;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jun 25 18:43:10.682652 waagent[1905]: 2024-06-25T18:43:10.682573Z INFO MonitorHandler ExtHandler Network interfaces: Jun 25 18:43:10.682652 waagent[1905]: Executing ['ip', '-a', '-o', 'link']: Jun 25 18:43:10.682652 waagent[1905]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 25 18:43:10.682652 waagent[1905]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a0:8c:7d brd ff:ff:ff:ff:ff:ff Jun 25 18:43:10.682652 waagent[1905]: 3: enP15383s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a0:8c:7d brd ff:ff:ff:ff:ff:ff\ altname enP15383p0s2 Jun 25 18:43:10.682652 waagent[1905]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 25 18:43:10.682652 waagent[1905]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 25 18:43:10.682652 waagent[1905]: 2: eth0 inet 10.200.8.40/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 25 18:43:10.682652 waagent[1905]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 25 18:43:10.682652 waagent[1905]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jun 25 18:43:10.682652 waagent[1905]: 2: eth0 inet6 fe80::222:48ff:fea0:8c7d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 25 18:43:10.682652 waagent[1905]: 3: enP15383s1 inet6 fe80::222:48ff:fea0:8c7d/64 scope 
link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 25 18:43:10.708938 waagent[1905]: 2024-06-25T18:43:10.708881Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jun 25 18:43:10.708938 waagent[1905]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:43:10.708938 waagent[1905]: pkts bytes target prot opt in out source destination Jun 25 18:43:10.708938 waagent[1905]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:43:10.708938 waagent[1905]: pkts bytes target prot opt in out source destination Jun 25 18:43:10.708938 waagent[1905]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:43:10.708938 waagent[1905]: pkts bytes target prot opt in out source destination Jun 25 18:43:10.708938 waagent[1905]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 25 18:43:10.708938 waagent[1905]: 5 457 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 25 18:43:10.708938 waagent[1905]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 25 18:43:10.712244 waagent[1905]: 2024-06-25T18:43:10.712189Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 25 18:43:10.712244 waagent[1905]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:43:10.712244 waagent[1905]: pkts bytes target prot opt in out source destination Jun 25 18:43:10.712244 waagent[1905]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:43:10.712244 waagent[1905]: pkts bytes target prot opt in out source destination Jun 25 18:43:10.712244 waagent[1905]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:43:10.712244 waagent[1905]: pkts bytes target prot opt in out source destination Jun 25 18:43:10.712244 waagent[1905]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 25 18:43:10.712244 waagent[1905]: 10 1102 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 25 18:43:10.712244 waagent[1905]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 25 18:43:10.712617 waagent[1905]: 2024-06-25T18:43:10.712503Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jun 25 18:43:14.724691 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 18:43:14.731526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:14.827004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:43:14.831488 (kubelet)[2135]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:43:15.396522 kubelet[2135]: E0625 18:43:15.396459 2135 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:43:15.400623 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:43:15.400788 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:43:25.449930 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 18:43:25.456557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:25.544383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
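[annotation] The routing table dumped by the MonitorHandler above is the raw contents of /proc/net/route, where destination, gateway and mask are little-endian hex (0108C80A is 10.200.8.1, 0008C80A is 10.200.8.0). A short stdlib-only sketch of decoding it:

    # Sketch: decode the hex, little-endian addresses in /proc/net/route,
    # the same format dumped above (e.g. 0108C80A -> 10.200.8.1).
    import socket
    import struct

    def hex_to_ip(hex_addr: str) -> str:
        # /proc/net/route stores addresses as little-endian hex words
        return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

    def routes():
        with open("/proc/net/route") as f:
            next(f)  # skip the header line (Iface Destination Gateway ... Mask ...)
            for line in f:
                fields = line.split()
                iface, dest, gw, mask = fields[0], fields[1], fields[2], fields[7]
                yield iface, hex_to_ip(dest), hex_to_ip(gw), hex_to_ip(mask)

    if __name__ == "__main__":
        for iface, dest, gw, mask in routes():
            print(f"{iface}: {dest}/{mask} via {gw}")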
Jun 25 18:43:25.548813 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:43:26.046993 chronyd[1699]: Selected source PHC0 Jun 25 18:43:26.087114 kubelet[2151]: E0625 18:43:26.087053 2151 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:43:26.089762 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:43:26.089961 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:43:34.570889 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 18:43:34.575569 systemd[1]: Started sshd@0-10.200.8.40:22-10.200.16.10:41628.service - OpenSSH per-connection server daemon (10.200.16.10:41628). Jun 25 18:43:35.269621 sshd[2160]: Accepted publickey for core from 10.200.16.10 port 41628 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:43:35.271381 sshd[2160]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:35.276604 systemd-logind[1697]: New session 3 of user core. Jun 25 18:43:35.285433 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 18:43:35.866474 systemd[1]: Started sshd@1-10.200.8.40:22-10.200.16.10:41640.service - OpenSSH per-connection server daemon (10.200.16.10:41640). Jun 25 18:43:36.200019 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 18:43:36.207503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:36.295338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:43:36.306633 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:43:36.522000 sshd[2165]: Accepted publickey for core from 10.200.16.10 port 41640 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:43:36.523574 sshd[2165]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:36.527622 systemd-logind[1697]: New session 4 of user core. Jun 25 18:43:36.534441 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 18:43:36.823364 kubelet[2175]: E0625 18:43:36.823092 2175 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:43:36.825852 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:43:36.826039 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:43:36.980844 sshd[2165]: pam_unix(sshd:session): session closed for user core Jun 25 18:43:36.985090 systemd[1]: sshd@1-10.200.8.40:22-10.200.16.10:41640.service: Deactivated successfully. Jun 25 18:43:36.987130 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 18:43:36.987884 systemd-logind[1697]: Session 4 logged out. Waiting for processes to exit. Jun 25 18:43:36.988764 systemd-logind[1697]: Removed session 4. 
Jun 25 18:43:37.093247 systemd[1]: Started sshd@2-10.200.8.40:22-10.200.16.10:41650.service - OpenSSH per-connection server daemon (10.200.16.10:41650). Jun 25 18:43:37.739052 sshd[2188]: Accepted publickey for core from 10.200.16.10 port 41650 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:43:37.740795 sshd[2188]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:37.745442 systemd-logind[1697]: New session 5 of user core. Jun 25 18:43:37.754627 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 18:43:38.195803 sshd[2188]: pam_unix(sshd:session): session closed for user core Jun 25 18:43:38.200512 systemd[1]: sshd@2-10.200.8.40:22-10.200.16.10:41650.service: Deactivated successfully. Jun 25 18:43:38.202712 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 18:43:38.203467 systemd-logind[1697]: Session 5 logged out. Waiting for processes to exit. Jun 25 18:43:38.204325 systemd-logind[1697]: Removed session 5. Jun 25 18:43:38.308992 systemd[1]: Started sshd@3-10.200.8.40:22-10.200.16.10:41656.service - OpenSSH per-connection server daemon (10.200.16.10:41656). Jun 25 18:43:38.960028 sshd[2195]: Accepted publickey for core from 10.200.16.10 port 41656 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:43:38.962587 sshd[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:38.967979 systemd-logind[1697]: New session 6 of user core. Jun 25 18:43:38.978661 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 18:43:39.459588 sshd[2195]: pam_unix(sshd:session): session closed for user core Jun 25 18:43:39.463064 systemd[1]: sshd@3-10.200.8.40:22-10.200.16.10:41656.service: Deactivated successfully. Jun 25 18:43:39.465321 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 18:43:39.467156 systemd-logind[1697]: Session 6 logged out. Waiting for processes to exit. Jun 25 18:43:39.468356 systemd-logind[1697]: Removed session 6. Jun 25 18:43:39.579532 systemd[1]: Started sshd@4-10.200.8.40:22-10.200.16.10:41664.service - OpenSSH per-connection server daemon (10.200.16.10:41664). Jun 25 18:43:40.288415 sshd[2202]: Accepted publickey for core from 10.200.16.10 port 41664 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:43:40.290113 sshd[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:40.295319 systemd-logind[1697]: New session 7 of user core. Jun 25 18:43:40.301655 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 18:43:40.863380 sudo[2205]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 18:43:40.863725 sudo[2205]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:43:40.875556 sudo[2205]: pam_unix(sudo:session): session closed for user root Jun 25 18:43:40.979774 sshd[2202]: pam_unix(sshd:session): session closed for user core Jun 25 18:43:40.984730 systemd[1]: sshd@4-10.200.8.40:22-10.200.16.10:41664.service: Deactivated successfully. Jun 25 18:43:40.987006 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 18:43:40.987915 systemd-logind[1697]: Session 7 logged out. Waiting for processes to exit. Jun 25 18:43:40.988915 systemd-logind[1697]: Removed session 7. Jun 25 18:43:41.093508 systemd[1]: Started sshd@5-10.200.8.40:22-10.200.16.10:41680.service - OpenSSH per-connection server daemon (10.200.16.10:41680). 
Jun 25 18:43:41.744119 sshd[2210]: Accepted publickey for core from 10.200.16.10 port 41680 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:43:41.745868 sshd[2210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:41.751379 systemd-logind[1697]: New session 8 of user core. Jun 25 18:43:41.757419 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 18:43:42.098887 sudo[2217]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 18:43:42.099740 sudo[2217]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:43:42.103071 sudo[2217]: pam_unix(sudo:session): session closed for user root Jun 25 18:43:42.107788 sudo[2216]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 18:43:42.108091 sudo[2216]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:43:42.124831 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 18:43:42.126285 auditctl[2220]: No rules Jun 25 18:43:42.126628 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 18:43:42.126820 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 18:43:42.129446 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:43:42.153823 augenrules[2238]: No rules Jun 25 18:43:42.155126 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:43:42.156157 sudo[2216]: pam_unix(sudo:session): session closed for user root Jun 25 18:43:42.260257 sshd[2210]: pam_unix(sshd:session): session closed for user core Jun 25 18:43:42.264822 systemd[1]: sshd@5-10.200.8.40:22-10.200.16.10:41680.service: Deactivated successfully. Jun 25 18:43:42.266619 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 18:43:42.267370 systemd-logind[1697]: Session 8 logged out. Waiting for processes to exit. Jun 25 18:43:42.268213 systemd-logind[1697]: Removed session 8. Jun 25 18:43:42.377589 systemd[1]: Started sshd@6-10.200.8.40:22-10.200.16.10:41690.service - OpenSSH per-connection server daemon (10.200.16.10:41690). Jun 25 18:43:43.015689 sshd[2246]: Accepted publickey for core from 10.200.16.10 port 41690 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:43:43.017399 sshd[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:43.022223 systemd-logind[1697]: New session 9 of user core. Jun 25 18:43:43.030437 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 18:43:43.368773 sudo[2249]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 18:43:43.369110 sudo[2249]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:43:44.004558 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 18:43:44.006390 (dockerd)[2258]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 25 18:43:45.328243 dockerd[2258]: time="2024-06-25T18:43:45.328181290Z" level=info msg="Starting up" Jun 25 18:43:45.464310 dockerd[2258]: time="2024-06-25T18:43:45.464165711Z" level=info msg="Loading containers: start." Jun 25 18:43:45.613292 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Jun 25 18:43:45.665355 kernel: Initializing XFRM netlink socket Jun 25 18:43:45.867641 systemd-networkd[1359]: docker0: Link UP Jun 25 18:43:45.887040 dockerd[2258]: time="2024-06-25T18:43:45.886991985Z" level=info msg="Loading containers: done." Jun 25 18:43:46.208739 dockerd[2258]: time="2024-06-25T18:43:46.208685770Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 18:43:46.208960 dockerd[2258]: time="2024-06-25T18:43:46.208927070Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 18:43:46.209088 dockerd[2258]: time="2024-06-25T18:43:46.209056171Z" level=info msg="Daemon has completed initialization" Jun 25 18:43:46.257298 dockerd[2258]: time="2024-06-25T18:43:46.257149313Z" level=info msg="API listen on /run/docker.sock" Jun 25 18:43:46.257704 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 18:43:46.950325 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 25 18:43:46.956543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:47.090088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:43:47.094684 (kubelet)[2390]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:43:47.137626 kubelet[2390]: E0625 18:43:47.137555 2390 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:43:47.140113 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:43:47.140338 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:43:47.608014 update_engine[1701]: I0625 18:43:47.607867 1701 update_attempter.cc:509] Updating boot flags... Jun 25 18:43:48.667294 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2410) Jun 25 18:43:48.776329 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2314) Jun 25 18:43:49.277927 containerd[1716]: time="2024-06-25T18:43:49.277805688Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jun 25 18:43:50.156539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2988727231.mount: Deactivated successfully. 
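[annotation] Once dockerd reports "API listen on /run/docker.sock", the daemon can be queried over that unix socket with plain HTTP; a stdlib-only sketch (no Docker SDK assumed, requires permission to open the socket):

    # Sketch: talk to the Docker API over /run/docker.sock with raw HTTP,
    # matching the "API listen on /run/docker.sock" line above.
    import socket

    def docker_get(path: str = "/version") -> str:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect("/run/docker.sock")
        s.sendall(f"GET {path} HTTP/1.0\r\nHost: docker\r\n\r\n".encode())
        chunks = []
        while True:                      # HTTP/1.0: server closes when done
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
        s.close()
        return b"".join(chunks).decode(errors="replace")

    if __name__ == "__main__":
        print(docker_get("/version"))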
Jun 25 18:43:52.560029 containerd[1716]: time="2024-06-25T18:43:52.559914559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:52.563205 containerd[1716]: time="2024-06-25T18:43:52.563062264Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=35235845" Jun 25 18:43:52.567616 containerd[1716]: time="2024-06-25T18:43:52.567561971Z" level=info msg="ImageCreate event name:\"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:52.573534 containerd[1716]: time="2024-06-25T18:43:52.573486380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:52.574796 containerd[1716]: time="2024-06-25T18:43:52.574623082Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"35232637\" in 3.296769394s" Jun 25 18:43:52.574796 containerd[1716]: time="2024-06-25T18:43:52.574665882Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\"" Jun 25 18:43:52.595897 containerd[1716]: time="2024-06-25T18:43:52.595861414Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jun 25 18:43:55.115893 containerd[1716]: time="2024-06-25T18:43:55.115831641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:55.118946 containerd[1716]: time="2024-06-25T18:43:55.118793746Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=32069755" Jun 25 18:43:55.123625 containerd[1716]: time="2024-06-25T18:43:55.123563753Z" level=info msg="ImageCreate event name:\"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:55.132513 containerd[1716]: time="2024-06-25T18:43:55.132459967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:55.133940 containerd[1716]: time="2024-06-25T18:43:55.133447868Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"33590639\" in 2.537545354s" Jun 25 18:43:55.133940 containerd[1716]: time="2024-06-25T18:43:55.133489668Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\"" Jun 25 18:43:55.155005 
containerd[1716]: time="2024-06-25T18:43:55.154971201Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jun 25 18:43:56.668098 containerd[1716]: time="2024-06-25T18:43:56.668043099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:56.671212 containerd[1716]: time="2024-06-25T18:43:56.671057703Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=17153811" Jun 25 18:43:56.673958 containerd[1716]: time="2024-06-25T18:43:56.673907708Z" level=info msg="ImageCreate event name:\"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:56.679823 containerd[1716]: time="2024-06-25T18:43:56.679751517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:43:56.680761 containerd[1716]: time="2024-06-25T18:43:56.680728218Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"18674713\" in 1.525716117s" Jun 25 18:43:56.680950 containerd[1716]: time="2024-06-25T18:43:56.680855418Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\"" Jun 25 18:43:56.701945 containerd[1716]: time="2024-06-25T18:43:56.701913350Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jun 25 18:43:57.199750 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jun 25 18:43:57.211242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:57.335485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:43:57.346561 (kubelet)[2550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:43:57.410837 kubelet[2550]: E0625 18:43:57.410777 2550 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:43:57.413149 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:43:57.413384 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:43:59.716903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1992492698.mount: Deactivated successfully. 
Jun 25 18:44:01.522424 containerd[1716]: time="2024-06-25T18:44:01.522361495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:01.524351 containerd[1716]: time="2024-06-25T18:44:01.524292297Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=28409342" Jun 25 18:44:01.527953 containerd[1716]: time="2024-06-25T18:44:01.527897000Z" level=info msg="ImageCreate event name:\"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:01.533391 containerd[1716]: time="2024-06-25T18:44:01.533338605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:01.534070 containerd[1716]: time="2024-06-25T18:44:01.533917306Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"28408353\" in 4.831960856s" Jun 25 18:44:01.534070 containerd[1716]: time="2024-06-25T18:44:01.533958706Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\"" Jun 25 18:44:01.555242 containerd[1716]: time="2024-06-25T18:44:01.555198926Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jun 25 18:44:02.176931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1209343093.mount: Deactivated successfully. 
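[annotation] Each "Pulled image ... in ..." line above pairs an image size with an elapsed time, so a rough pull throughput can be read off directly; for the kube-proxy pull that is about 28.4 MB in 4.83 s, roughly 5.9 MB/s. A small sketch of the arithmetic, with the values copied from the log:

    # Sketch: rough pull throughput from the size/duration in the kube-proxy
    # pull lines above.
    size_bytes = 28_408_353      # 'size "28408353"'
    elapsed_s = 4.831960856      # 'in 4.831960856s'

    mb_per_s = size_bytes / elapsed_s / 1_000_000
    print(f"~{mb_per_s:.1f} MB/s")   # ~5.9 MB/s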
Jun 25 18:44:03.605786 containerd[1716]: time="2024-06-25T18:44:03.605724356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:03.609064 containerd[1716]: time="2024-06-25T18:44:03.608999659Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jun 25 18:44:03.613737 containerd[1716]: time="2024-06-25T18:44:03.613681963Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:03.619312 containerd[1716]: time="2024-06-25T18:44:03.619247668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:03.620491 containerd[1716]: time="2024-06-25T18:44:03.620355269Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.065109843s" Jun 25 18:44:03.620491 containerd[1716]: time="2024-06-25T18:44:03.620394570Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jun 25 18:44:03.640751 containerd[1716]: time="2024-06-25T18:44:03.640723889Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 18:44:04.089257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2670323497.mount: Deactivated successfully. 
Jun 25 18:44:04.114302 containerd[1716]: time="2024-06-25T18:44:04.114247234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:04.118277 containerd[1716]: time="2024-06-25T18:44:04.118208938Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jun 25 18:44:04.128704 containerd[1716]: time="2024-06-25T18:44:04.128633748Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:04.133458 containerd[1716]: time="2024-06-25T18:44:04.133403052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:04.134809 containerd[1716]: time="2024-06-25T18:44:04.134215353Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 493.436564ms" Jun 25 18:44:04.134809 containerd[1716]: time="2024-06-25T18:44:04.134253053Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 18:44:04.154620 containerd[1716]: time="2024-06-25T18:44:04.154586772Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 18:44:04.789813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3154454963.mount: Deactivated successfully. Jun 25 18:44:07.449801 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jun 25 18:44:07.459534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:12.506735 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:12.513949 (kubelet)[2646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:44:12.557964 kubelet[2646]: E0625 18:44:12.557911 2646 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:44:12.560438 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:44:12.560649 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
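[annotation] The repeated kubelet failures above all have the same cause: /var/lib/kubelet/config.yaml has not been written yet (it normally appears once kubeadm or an equivalent bootstrapper runs), so the unit exits and systemd keeps scheduling restart jobs with an increasing counter. A sketch of watching that loop from outside, assuming systemd exposes the NRestarts/Result properties via `systemctl show` (true on reasonably recent systemd, but treated here as an assumption):

    # Sketch: observe the kubelet restart loop visible in this log by asking
    # systemd for the unit's restart counter, last result and state.
    import subprocess

    def unit_status(unit: str = "kubelet.service") -> dict:
        out = subprocess.run(
            ["systemctl", "show", unit,
             "-p", "NRestarts", "-p", "Result", "-p", "ActiveState"],
            capture_output=True, text=True, check=True,
        ).stdout
        return dict(line.split("=", 1) for line in out.strip().splitlines())

    if __name__ == "__main__":
        # e.g. {'NRestarts': '6', 'Result': 'exit-code', 'ActiveState': 'activating'}
        print(unit_status())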
Jun 25 18:44:15.630251 containerd[1716]: time="2024-06-25T18:44:15.630185036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:15.632523 containerd[1716]: time="2024-06-25T18:44:15.632409438Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Jun 25 18:44:15.636415 containerd[1716]: time="2024-06-25T18:44:15.636347240Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:15.641835 containerd[1716]: time="2024-06-25T18:44:15.641668443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:15.643433 containerd[1716]: time="2024-06-25T18:44:15.643079244Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 11.488453872s" Jun 25 18:44:15.643433 containerd[1716]: time="2024-06-25T18:44:15.643120044Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 18:44:18.821972 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:18.828549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:18.859117 systemd[1]: Reloading requested from client PID 2753 ('systemctl') (unit session-9.scope)... Jun 25 18:44:18.859136 systemd[1]: Reloading... Jun 25 18:44:18.977291 zram_generator::config[2793]: No configuration found. Jun 25 18:44:19.091075 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:44:19.166801 systemd[1]: Reloading finished in 307 ms. Jun 25 18:44:19.214111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:19.219713 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:19.221445 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:44:19.221651 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:19.228628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:20.362179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:20.371610 (kubelet)[2862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:44:20.420729 kubelet[2862]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:44:20.420729 kubelet[2862]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jun 25 18:44:20.420729 kubelet[2862]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:44:20.421192 kubelet[2862]: I0625 18:44:20.420772 2862 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:44:21.952592 kubelet[2862]: I0625 18:44:20.668427 2862 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jun 25 18:44:21.952592 kubelet[2862]: I0625 18:44:20.668459 2862 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:44:21.952592 kubelet[2862]: I0625 18:44:20.669206 2862 server.go:919] "Client rotation is on, will bootstrap in background" Jun 25 18:44:21.952592 kubelet[2862]: E0625 18:44:20.687707 2862 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.40:6443: connect: connection refused Jun 25 18:44:21.952592 kubelet[2862]: I0625 18:44:20.688442 2862 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:44:21.952592 kubelet[2862]: I0625 18:44:20.696798 2862 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 18:44:21.952592 kubelet[2862]: I0625 18:44:20.698067 2862 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:44:21.953438 kubelet[2862]: I0625 18:44:20.698236 2862 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:44:21.953438 kubelet[2862]: I0625 18:44:20.698257 2862 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:44:21.953438 kubelet[2862]: I0625 18:44:20.698277 2862 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 
18:44:21.953438 kubelet[2862]: I0625 18:44:21.953002 2862 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:44:21.953438 kubelet[2862]: I0625 18:44:21.953214 2862 kubelet.go:396] "Attempting to sync node with API server" Jun 25 18:44:21.953438 kubelet[2862]: I0625 18:44:21.953243 2862 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:44:21.953438 kubelet[2862]: I0625 18:44:21.953336 2862 kubelet.go:312] "Adding apiserver pod source" Jun 25 18:44:21.953902 kubelet[2862]: I0625 18:44:21.953364 2862 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:44:21.959409 kubelet[2862]: W0625 18:44:21.957546 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 25 18:44:21.959409 kubelet[2862]: E0625 18:44:21.957642 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 25 18:44:21.959409 kubelet[2862]: I0625 18:44:21.957824 2862 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:44:21.959905 kubelet[2862]: W0625 18:44:21.959702 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-d50f1c7422&limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 25 18:44:21.959905 kubelet[2862]: E0625 18:44:21.959868 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-d50f1c7422&limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 25 18:44:21.965293 kubelet[2862]: I0625 18:44:21.964614 2862 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 18:44:21.965293 kubelet[2862]: W0625 18:44:21.964685 2862 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
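[annotation] The reflector errors above ("dial tcp 10.200.8.40:6443: connect: connection refused") are expected at this stage: the kubelet itself will launch the kube-apiserver static pod, so nothing is listening on 6443 yet. A sketch of probing that endpoint the same way, to watch it flip from refused to open (address taken from the log, everything else stdlib):

    # Sketch: probe the API server endpoint the kubelet is failing to reach
    # above; connection refused is expected until the kube-apiserver static
    # pod is up and listening on 6443.
    import socket
    import time

    def port_open(host: str = "10.200.8.40", port: int = 6443,
                  timeout: float = 2.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        while not port_open():
            print("kube-apiserver not reachable yet (connection refused/timeout)")
            time.sleep(5)
        print("kube-apiserver is accepting connections")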
Jun 25 18:44:21.965458 kubelet[2862]: I0625 18:44:21.965445 2862 server.go:1256] "Started kubelet" Jun 25 18:44:21.967085 kubelet[2862]: I0625 18:44:21.966918 2862 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:44:21.968192 kubelet[2862]: I0625 18:44:21.967820 2862 server.go:461] "Adding debug handlers to kubelet server" Jun 25 18:44:21.971295 kubelet[2862]: I0625 18:44:21.970401 2862 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:44:21.971295 kubelet[2862]: I0625 18:44:21.970779 2862 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 18:44:21.971295 kubelet[2862]: I0625 18:44:21.970987 2862 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:44:21.973297 kubelet[2862]: E0625 18:44:21.973258 2862 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4012.0.0-a-d50f1c7422.17dc53961e2e6a85 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4012.0.0-a-d50f1c7422,UID:ci-4012.0.0-a-d50f1c7422,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4012.0.0-a-d50f1c7422,},FirstTimestamp:2024-06-25 18:44:21.965245061 +0000 UTC m=+1.588685135,LastTimestamp:2024-06-25 18:44:21.965245061 +0000 UTC m=+1.588685135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012.0.0-a-d50f1c7422,}" Jun 25 18:44:21.976710 kubelet[2862]: E0625 18:44:21.976690 2862 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:44:21.976858 kubelet[2862]: I0625 18:44:21.976827 2862 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:44:21.978452 kubelet[2862]: I0625 18:44:21.978428 2862 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:44:21.979613 kubelet[2862]: I0625 18:44:21.979349 2862 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:44:21.980510 kubelet[2862]: E0625 18:44:21.980439 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-d50f1c7422?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="200ms" Jun 25 18:44:21.980969 kubelet[2862]: W0625 18:44:21.980672 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 25 18:44:21.980969 kubelet[2862]: E0625 18:44:21.980726 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 25 18:44:21.982331 kubelet[2862]: I0625 18:44:21.982262 2862 factory.go:221] Registration of the systemd container factory successfully Jun 25 18:44:21.982898 kubelet[2862]: I0625 18:44:21.982863 2862 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 18:44:21.985024 kubelet[2862]: I0625 18:44:21.985006 2862 factory.go:221] Registration of the containerd container factory successfully Jun 25 18:44:22.018136 kubelet[2862]: I0625 18:44:22.018097 2862 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:44:22.018136 kubelet[2862]: I0625 18:44:22.018124 2862 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:44:22.018321 kubelet[2862]: I0625 18:44:22.018152 2862 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:44:22.079423 kubelet[2862]: I0625 18:44:22.079377 2862 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:22.079833 kubelet[2862]: E0625 18:44:22.079810 2862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:22.107491 kubelet[2862]: I0625 18:44:22.107447 2862 policy_none.go:49] "None policy: Start" Jun 25 18:44:22.108404 kubelet[2862]: I0625 18:44:22.108380 2862 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 18:44:22.108510 kubelet[2862]: I0625 18:44:22.108419 2862 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:44:22.117122 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 18:44:22.125982 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 18:44:22.129389 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
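[annotation] The kubepods, kubepods-burstable and kubepods-besteffort slices created above map directly onto the cgroup tree. A sketch of inspecting them, assuming a unified cgroup v2 hierarchy mounted at /sys/fs/cgroup (an assumption about this host, not confirmed by the log):

    # Sketch: list the child slices of the kubepods.slice cgroup that systemd
    # just created. Assumes cgroup v2 mounted at /sys/fs/cgroup.
    from pathlib import Path

    def kubepods_children(root: str = "/sys/fs/cgroup/kubepods.slice"):
        try:
            return sorted(p.name for p in Path(root).iterdir() if p.is_dir())
        except FileNotFoundError:
            return []

    if __name__ == "__main__":
        for name in kubepods_children():
            print(name)   # e.g. kubepods-burstable.slice, kubepods-besteffort.slice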
Jun 25 18:44:22.140798 kubelet[2862]: I0625 18:44:22.139844 2862 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:44:22.140798 kubelet[2862]: I0625 18:44:22.140158 2862 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:44:22.144709 kubelet[2862]: I0625 18:44:22.144685 2862 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:44:22.144948 kubelet[2862]: E0625 18:44:22.144933 2862 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4012.0.0-a-d50f1c7422\" not found" Jun 25 18:44:22.147053 kubelet[2862]: I0625 18:44:22.147037 2862 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 18:44:22.147132 kubelet[2862]: I0625 18:44:22.147068 2862 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:44:22.147132 kubelet[2862]: I0625 18:44:22.147087 2862 kubelet.go:2329] "Starting kubelet main sync loop" Jun 25 18:44:22.147132 kubelet[2862]: E0625 18:44:22.147131 2862 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jun 25 18:44:22.148105 kubelet[2862]: W0625 18:44:22.147942 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 25 18:44:22.148105 kubelet[2862]: E0625 18:44:22.147998 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 25 18:44:22.181765 kubelet[2862]: E0625 18:44:22.181732 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-d50f1c7422?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="400ms" Jun 25 18:44:22.248291 kubelet[2862]: I0625 18:44:22.248110 2862 topology_manager.go:215] "Topology Admit Handler" podUID="c2217986557928b946199e82fe17737e" podNamespace="kube-system" podName="kube-apiserver-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:22.250616 kubelet[2862]: I0625 18:44:22.250560 2862 topology_manager.go:215] "Topology Admit Handler" podUID="bf73af8a8f552d40b4eb9f67e4d720d6" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:22.253049 kubelet[2862]: I0625 18:44:22.252740 2862 topology_manager.go:215] "Topology Admit Handler" podUID="d22f14deedaeadd2a14810aa35e463d6" podNamespace="kube-system" podName="kube-scheduler-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:22.260717 systemd[1]: Created slice kubepods-burstable-podc2217986557928b946199e82fe17737e.slice - libcontainer container kubepods-burstable-podc2217986557928b946199e82fe17737e.slice. 
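[annotation] The three control-plane pods admitted above are static pods read from the manifest directory registered earlier in this log ("Adding static pod path" path="/etc/kubernetes/manifests"). A sketch of listing what the kubelet would pick up there; the example file names are only what a typical control-plane node contains, not taken from this log:

    # Sketch: list the static pod manifests the kubelet watches, per the
    # "Adding static pod path" line earlier in this log.
    import os

    MANIFEST_DIR = "/etc/kubernetes/manifests"

    def static_pod_manifests(path: str = MANIFEST_DIR):
        try:
            return sorted(f for f in os.listdir(path)
                          if f.endswith((".yaml", ".yml", ".json")))
        except FileNotFoundError:
            return []

    if __name__ == "__main__":
        for name in static_pod_manifests():
            # e.g. kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml
            print(name)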
Jun 25 18:44:22.280938 kubelet[2862]: I0625 18:44:22.280476 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d22f14deedaeadd2a14810aa35e463d6-kubeconfig\") pod \"kube-scheduler-ci-4012.0.0-a-d50f1c7422\" (UID: \"d22f14deedaeadd2a14810aa35e463d6\") " pod="kube-system/kube-scheduler-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:22.280938 kubelet[2862]: I0625 18:44:22.280531 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2217986557928b946199e82fe17737e-k8s-certs\") pod \"kube-apiserver-ci-4012.0.0-a-d50f1c7422\" (UID: \"c2217986557928b946199e82fe17737e\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:22.280938 kubelet[2862]: I0625 18:44:22.280563 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2217986557928b946199e82fe17737e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.0.0-a-d50f1c7422\" (UID: \"c2217986557928b946199e82fe17737e\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:22.280938 kubelet[2862]: I0625 18:44:22.280590 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf73af8a8f552d40b4eb9f67e4d720d6-k8s-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-d50f1c7422\" (UID: \"bf73af8a8f552d40b4eb9f67e4d720d6\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:22.280938 kubelet[2862]: I0625 18:44:22.280618 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bf73af8a8f552d40b4eb9f67e4d720d6-kubeconfig\") pod \"kube-controller-manager-ci-4012.0.0-a-d50f1c7422\" (UID: \"bf73af8a8f552d40b4eb9f67e4d720d6\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:22.281226 kubelet[2862]: I0625 18:44:22.280650 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf73af8a8f552d40b4eb9f67e4d720d6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.0.0-a-d50f1c7422\" (UID: \"bf73af8a8f552d40b4eb9f67e4d720d6\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:22.281226 kubelet[2862]: I0625 18:44:22.280703 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf73af8a8f552d40b4eb9f67e4d720d6-ca-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-d50f1c7422\" (UID: \"bf73af8a8f552d40b4eb9f67e4d720d6\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:22.281226 kubelet[2862]: I0625 18:44:22.280753 2862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bf73af8a8f552d40b4eb9f67e4d720d6-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.0.0-a-d50f1c7422\" (UID: \"bf73af8a8f552d40b4eb9f67e4d720d6\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:22.281226 kubelet[2862]: I0625 18:44:22.280788 2862 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2217986557928b946199e82fe17737e-ca-certs\") pod \"kube-apiserver-ci-4012.0.0-a-d50f1c7422\" (UID: \"c2217986557928b946199e82fe17737e\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:22.283026 kubelet[2862]: I0625 18:44:22.282615 2862 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:22.283026 kubelet[2862]: E0625 18:44:22.282976 2862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:22.284936 systemd[1]: Created slice kubepods-burstable-podd22f14deedaeadd2a14810aa35e463d6.slice - libcontainer container kubepods-burstable-podd22f14deedaeadd2a14810aa35e463d6.slice. Jun 25 18:44:22.289091 systemd[1]: Created slice kubepods-burstable-podbf73af8a8f552d40b4eb9f67e4d720d6.slice - libcontainer container kubepods-burstable-podbf73af8a8f552d40b4eb9f67e4d720d6.slice. Jun 25 18:44:22.581452 containerd[1716]: time="2024-06-25T18:44:22.581317531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.0.0-a-d50f1c7422,Uid:c2217986557928b946199e82fe17737e,Namespace:kube-system,Attempt:0,}" Jun 25 18:44:22.582856 kubelet[2862]: E0625 18:44:22.582808 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-d50f1c7422?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="800ms" Jun 25 18:44:22.588568 containerd[1716]: time="2024-06-25T18:44:22.588511321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.0.0-a-d50f1c7422,Uid:d22f14deedaeadd2a14810aa35e463d6,Namespace:kube-system,Attempt:0,}" Jun 25 18:44:22.592638 containerd[1716]: time="2024-06-25T18:44:22.592605015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.0.0-a-d50f1c7422,Uid:bf73af8a8f552d40b4eb9f67e4d720d6,Namespace:kube-system,Attempt:0,}" Jun 25 18:44:22.684860 kubelet[2862]: I0625 18:44:22.684826 2862 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:22.685183 kubelet[2862]: E0625 18:44:22.685163 2862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:22.715285 kubelet[2862]: E0625 18:44:22.715239 2862 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.40:6443: connect: connection refused Jun 25 18:44:22.775777 kubelet[2862]: W0625 18:44:22.775713 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 25 18:44:22.775777 kubelet[2862]: E0625 18:44:22.775784 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://10.200.8.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 25 18:44:22.969302 kubelet[2862]: W0625 18:44:22.969243 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 25 18:44:22.969302 kubelet[2862]: E0625 18:44:22.969309 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 25 18:44:23.192774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3025714277.mount: Deactivated successfully. Jun 25 18:44:23.227436 containerd[1716]: time="2024-06-25T18:44:23.227328019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:23.234502 containerd[1716]: time="2024-06-25T18:44:23.234441509Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jun 25 18:44:23.236491 containerd[1716]: time="2024-06-25T18:44:23.236447506Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:23.239719 containerd[1716]: time="2024-06-25T18:44:23.239679601Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:23.242002 containerd[1716]: time="2024-06-25T18:44:23.241847798Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:44:23.245061 containerd[1716]: time="2024-06-25T18:44:23.245024694Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:23.246893 containerd[1716]: time="2024-06-25T18:44:23.246624792Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:44:23.250154 containerd[1716]: time="2024-06-25T18:44:23.250122587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:23.250892 containerd[1716]: time="2024-06-25T18:44:23.250857486Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 662.250865ms" Jun 25 18:44:23.252459 containerd[1716]: time="2024-06-25T18:44:23.252427683Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 670.947552ms" Jun 25 18:44:23.256079 containerd[1716]: time="2024-06-25T18:44:23.256045878Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 663.364463ms" Jun 25 18:44:23.338837 kubelet[2862]: W0625 18:44:23.338697 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 25 18:44:23.339092 kubelet[2862]: E0625 18:44:23.338848 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 25 18:44:23.383379 kubelet[2862]: E0625 18:44:23.383348 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-d50f1c7422?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="1.6s" Jun 25 18:44:23.487764 kubelet[2862]: I0625 18:44:23.487427 2862 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:23.487894 kubelet[2862]: E0625 18:44:23.487817 2862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:23.540587 kubelet[2862]: W0625 18:44:23.540519 2862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-d50f1c7422&limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 25 18:44:23.540587 kubelet[2862]: E0625 18:44:23.540593 2862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-d50f1c7422&limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Jun 25 18:44:23.974589 containerd[1716]: time="2024-06-25T18:44:23.974499064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:23.976052 containerd[1716]: time="2024-06-25T18:44:23.974829863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:23.979622 containerd[1716]: time="2024-06-25T18:44:23.976642161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:23.979622 containerd[1716]: time="2024-06-25T18:44:23.976664461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:23.987076 containerd[1716]: time="2024-06-25T18:44:23.987001646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:23.989497 containerd[1716]: time="2024-06-25T18:44:23.989316943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:23.989497 containerd[1716]: time="2024-06-25T18:44:23.989346843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:23.989497 containerd[1716]: time="2024-06-25T18:44:23.989370943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:23.990532 containerd[1716]: time="2024-06-25T18:44:23.990469941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:23.990847 containerd[1716]: time="2024-06-25T18:44:23.990727041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:23.991008 containerd[1716]: time="2024-06-25T18:44:23.990970041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:23.991130 containerd[1716]: time="2024-06-25T18:44:23.991105540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:24.007627 systemd[1]: Started cri-containerd-f6208e605821b1f35ada3470623b1349daa3728509d92030d2444aebc5f49471.scope - libcontainer container f6208e605821b1f35ada3470623b1349daa3728509d92030d2444aebc5f49471. Jun 25 18:44:24.019954 systemd[1]: Started cri-containerd-e7d3d7f24fcff50ede445a6de5da48af7e8d211226e25fd452d9fcbe350ce420.scope - libcontainer container e7d3d7f24fcff50ede445a6de5da48af7e8d211226e25fd452d9fcbe350ce420. Jun 25 18:44:24.025607 systemd[1]: Started cri-containerd-2c4f8a796288c731a78da22cbd1c79327176230a1dcb48c8474b52818c99c886.scope - libcontainer container 2c4f8a796288c731a78da22cbd1c79327176230a1dcb48c8474b52818c99c886. 
Jun 25 18:44:24.099851 containerd[1716]: time="2024-06-25T18:44:24.099809687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.0.0-a-d50f1c7422,Uid:d22f14deedaeadd2a14810aa35e463d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6208e605821b1f35ada3470623b1349daa3728509d92030d2444aebc5f49471\"" Jun 25 18:44:24.110048 containerd[1716]: time="2024-06-25T18:44:24.109751173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.0.0-a-d50f1c7422,Uid:c2217986557928b946199e82fe17737e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7d3d7f24fcff50ede445a6de5da48af7e8d211226e25fd452d9fcbe350ce420\"" Jun 25 18:44:24.112724 containerd[1716]: time="2024-06-25T18:44:24.112668869Z" level=info msg="CreateContainer within sandbox \"f6208e605821b1f35ada3470623b1349daa3728509d92030d2444aebc5f49471\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 18:44:24.115317 containerd[1716]: time="2024-06-25T18:44:24.115100765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.0.0-a-d50f1c7422,Uid:bf73af8a8f552d40b4eb9f67e4d720d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c4f8a796288c731a78da22cbd1c79327176230a1dcb48c8474b52818c99c886\"" Jun 25 18:44:24.117444 containerd[1716]: time="2024-06-25T18:44:24.117398362Z" level=info msg="CreateContainer within sandbox \"e7d3d7f24fcff50ede445a6de5da48af7e8d211226e25fd452d9fcbe350ce420\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 18:44:24.119689 containerd[1716]: time="2024-06-25T18:44:24.119661759Z" level=info msg="CreateContainer within sandbox \"2c4f8a796288c731a78da22cbd1c79327176230a1dcb48c8474b52818c99c886\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 18:44:24.178479 containerd[1716]: time="2024-06-25T18:44:24.178417876Z" level=info msg="CreateContainer within sandbox \"f6208e605821b1f35ada3470623b1349daa3728509d92030d2444aebc5f49471\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1ae0d7a609f3643228e36b47177881e3f0c924ef2d509b3699326de2aa3b9eb5\"" Jun 25 18:44:24.179014 containerd[1716]: time="2024-06-25T18:44:24.178965075Z" level=info msg="StartContainer for \"1ae0d7a609f3643228e36b47177881e3f0c924ef2d509b3699326de2aa3b9eb5\"" Jun 25 18:44:24.204348 containerd[1716]: time="2024-06-25T18:44:24.204193239Z" level=info msg="CreateContainer within sandbox \"2c4f8a796288c731a78da22cbd1c79327176230a1dcb48c8474b52818c99c886\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"96b3a31cd6f98683b85116361caed935be37d4e3b98d5b03c374457ddf437b7f\"" Jun 25 18:44:24.205048 containerd[1716]: time="2024-06-25T18:44:24.205021738Z" level=info msg="StartContainer for \"96b3a31cd6f98683b85116361caed935be37d4e3b98d5b03c374457ddf437b7f\"" Jun 25 18:44:24.205987 containerd[1716]: time="2024-06-25T18:44:24.205693837Z" level=info msg="CreateContainer within sandbox \"e7d3d7f24fcff50ede445a6de5da48af7e8d211226e25fd452d9fcbe350ce420\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5dbbeb21704d09f4dd87206c2b5d60e7a5e67886f9af2ba4156536c2d1fc8811\"" Jun 25 18:44:24.206519 containerd[1716]: time="2024-06-25T18:44:24.206493636Z" level=info msg="StartContainer for \"5dbbeb21704d09f4dd87206c2b5d60e7a5e67886f9af2ba4156536c2d1fc8811\"" Jun 25 18:44:24.225553 systemd[1]: Started cri-containerd-1ae0d7a609f3643228e36b47177881e3f0c924ef2d509b3699326de2aa3b9eb5.scope - libcontainer container 
1ae0d7a609f3643228e36b47177881e3f0c924ef2d509b3699326de2aa3b9eb5. Jun 25 18:44:24.269458 systemd[1]: Started cri-containerd-96b3a31cd6f98683b85116361caed935be37d4e3b98d5b03c374457ddf437b7f.scope - libcontainer container 96b3a31cd6f98683b85116361caed935be37d4e3b98d5b03c374457ddf437b7f. Jun 25 18:44:24.279392 systemd[1]: Started cri-containerd-5dbbeb21704d09f4dd87206c2b5d60e7a5e67886f9af2ba4156536c2d1fc8811.scope - libcontainer container 5dbbeb21704d09f4dd87206c2b5d60e7a5e67886f9af2ba4156536c2d1fc8811. Jun 25 18:44:24.324075 containerd[1716]: time="2024-06-25T18:44:24.324031870Z" level=info msg="StartContainer for \"1ae0d7a609f3643228e36b47177881e3f0c924ef2d509b3699326de2aa3b9eb5\" returns successfully" Jun 25 18:44:24.359234 containerd[1716]: time="2024-06-25T18:44:24.359182121Z" level=info msg="StartContainer for \"96b3a31cd6f98683b85116361caed935be37d4e3b98d5b03c374457ddf437b7f\" returns successfully" Jun 25 18:44:24.380505 containerd[1716]: time="2024-06-25T18:44:24.380460391Z" level=info msg="StartContainer for \"5dbbeb21704d09f4dd87206c2b5d60e7a5e67886f9af2ba4156536c2d1fc8811\" returns successfully" Jun 25 18:44:25.091692 kubelet[2862]: I0625 18:44:25.091656 2862 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:26.439604 kubelet[2862]: E0625 18:44:26.439557 2862 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4012.0.0-a-d50f1c7422\" not found" node="ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:26.531446 kubelet[2862]: I0625 18:44:26.530378 2862 kubelet_node_status.go:76] "Successfully registered node" node="ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:26.559103 kubelet[2862]: E0625 18:44:26.559053 2862 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-d50f1c7422\" not found" Jun 25 18:44:26.659581 kubelet[2862]: E0625 18:44:26.659532 2862 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-d50f1c7422\" not found" Jun 25 18:44:26.960626 kubelet[2862]: I0625 18:44:26.959114 2862 apiserver.go:52] "Watching apiserver" Jun 25 18:44:26.979536 kubelet[2862]: I0625 18:44:26.979501 2862 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:44:27.200882 kubelet[2862]: E0625 18:44:27.200842 2862 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4012.0.0-a-d50f1c7422\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:27.201399 kubelet[2862]: E0625 18:44:27.200849 2862 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4012.0.0-a-d50f1c7422\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:28.026308 kubelet[2862]: W0625 18:44:28.025856 2862 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:44:29.476653 systemd[1]: Reloading requested from client PID 3137 ('systemctl') (unit session-9.scope)... Jun 25 18:44:29.476670 systemd[1]: Reloading... Jun 25 18:44:29.561323 zram_generator::config[3174]: No configuration found. 
Jun 25 18:44:29.704395 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:44:29.801970 systemd[1]: Reloading finished in 324 ms. Jun 25 18:44:29.839948 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:29.849644 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:44:29.849895 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:29.857698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:29.960032 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:29.968615 (kubelet)[3241]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:44:30.015449 kubelet[3241]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:44:30.015449 kubelet[3241]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:44:30.015449 kubelet[3241]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:44:30.015973 kubelet[3241]: I0625 18:44:30.015495 3241 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:44:30.020122 kubelet[3241]: I0625 18:44:30.020088 3241 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jun 25 18:44:30.020122 kubelet[3241]: I0625 18:44:30.020114 3241 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:44:30.020358 kubelet[3241]: I0625 18:44:30.020337 3241 server.go:919] "Client rotation is on, will bootstrap in background" Jun 25 18:44:30.022183 kubelet[3241]: I0625 18:44:30.022154 3241 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 18:44:30.023969 kubelet[3241]: I0625 18:44:30.023837 3241 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:44:30.032670 kubelet[3241]: I0625 18:44:30.032624 3241 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:44:30.032891 kubelet[3241]: I0625 18:44:30.032860 3241 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:44:30.033060 kubelet[3241]: I0625 18:44:30.033040 3241 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:44:30.033193 kubelet[3241]: I0625 18:44:30.033071 3241 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:44:30.033193 kubelet[3241]: I0625 18:44:30.033083 3241 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:44:30.033193 kubelet[3241]: I0625 18:44:30.033120 3241 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:44:30.033342 kubelet[3241]: I0625 18:44:30.033224 3241 kubelet.go:396] "Attempting to sync node with API server" Jun 25 18:44:30.033342 kubelet[3241]: I0625 18:44:30.033241 3241 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:44:30.033342 kubelet[3241]: I0625 18:44:30.033281 3241 kubelet.go:312] "Adding apiserver pod source" Jun 25 18:44:30.033342 kubelet[3241]: I0625 18:44:30.033301 3241 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:44:30.038290 kubelet[3241]: I0625 18:44:30.037379 3241 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:44:30.038290 kubelet[3241]: I0625 18:44:30.037573 3241 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 18:44:30.038290 kubelet[3241]: I0625 18:44:30.038027 3241 server.go:1256] "Started kubelet" Jun 25 18:44:30.041048 kubelet[3241]: I0625 18:44:30.040900 3241 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:44:30.045215 kubelet[3241]: I0625 18:44:30.045190 3241 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:44:30.046176 kubelet[3241]: I0625 18:44:30.046152 3241 server.go:461] "Adding debug handlers to kubelet server" Jun 25 18:44:30.051300 kubelet[3241]: I0625 18:44:30.048573 3241 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Jun 25 18:44:30.051300 kubelet[3241]: I0625 18:44:30.048754 3241 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:44:30.051300 kubelet[3241]: I0625 18:44:30.049577 3241 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:44:30.051300 kubelet[3241]: I0625 18:44:30.049674 3241 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:44:30.051300 kubelet[3241]: I0625 18:44:30.049811 3241 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:44:30.061475 kubelet[3241]: I0625 18:44:30.061386 3241 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:44:30.064610 kubelet[3241]: I0625 18:44:30.064582 3241 factory.go:221] Registration of the systemd container factory successfully Jun 25 18:44:30.064703 kubelet[3241]: I0625 18:44:30.064685 3241 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 18:44:30.064991 kubelet[3241]: E0625 18:44:30.064959 3241 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:44:30.065310 kubelet[3241]: I0625 18:44:30.065172 3241 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 18:44:30.065310 kubelet[3241]: I0625 18:44:30.065202 3241 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:44:30.068555 kubelet[3241]: I0625 18:44:30.066130 3241 kubelet.go:2329] "Starting kubelet main sync loop" Jun 25 18:44:30.068555 kubelet[3241]: E0625 18:44:30.066188 3241 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:44:30.070299 kubelet[3241]: I0625 18:44:30.069717 3241 factory.go:221] Registration of the containerd container factory successfully Jun 25 18:44:30.123532 kubelet[3241]: I0625 18:44:30.123499 3241 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:44:30.123532 kubelet[3241]: I0625 18:44:30.123521 3241 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:44:30.123532 kubelet[3241]: I0625 18:44:30.123541 3241 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:44:30.123780 kubelet[3241]: I0625 18:44:30.123694 3241 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 18:44:30.123780 kubelet[3241]: I0625 18:44:30.123719 3241 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 18:44:30.123780 kubelet[3241]: I0625 18:44:30.123729 3241 policy_none.go:49] "None policy: Start" Jun 25 18:44:30.124732 kubelet[3241]: I0625 18:44:30.124499 3241 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 18:44:30.124732 kubelet[3241]: I0625 18:44:30.124530 3241 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:44:30.124975 kubelet[3241]: I0625 18:44:30.124893 3241 state_mem.go:75] "Updated machine memory state" Jun 25 18:44:30.131438 kubelet[3241]: I0625 18:44:30.130415 3241 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:44:30.131438 kubelet[3241]: I0625 18:44:30.130667 3241 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:44:30.152581 kubelet[3241]: 
I0625 18:44:30.152561 3241 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:30.161891 kubelet[3241]: I0625 18:44:30.161869 3241 kubelet_node_status.go:112] "Node was previously registered" node="ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:30.162051 kubelet[3241]: I0625 18:44:30.162032 3241 kubelet_node_status.go:76] "Successfully registered node" node="ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:30.166536 kubelet[3241]: I0625 18:44:30.166509 3241 topology_manager.go:215] "Topology Admit Handler" podUID="c2217986557928b946199e82fe17737e" podNamespace="kube-system" podName="kube-apiserver-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:30.166755 kubelet[3241]: I0625 18:44:30.166743 3241 topology_manager.go:215] "Topology Admit Handler" podUID="bf73af8a8f552d40b4eb9f67e4d720d6" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:30.167386 kubelet[3241]: I0625 18:44:30.167353 3241 topology_manager.go:215] "Topology Admit Handler" podUID="d22f14deedaeadd2a14810aa35e463d6" podNamespace="kube-system" podName="kube-scheduler-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:30.175055 kubelet[3241]: W0625 18:44:30.174870 3241 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:44:30.176338 kubelet[3241]: W0625 18:44:30.176287 3241 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:44:30.179722 kubelet[3241]: W0625 18:44:30.179693 3241 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:44:30.179803 kubelet[3241]: E0625 18:44:30.179772 3241 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4012.0.0-a-d50f1c7422\" already exists" pod="kube-system/kube-controller-manager-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:30.252084 kubelet[3241]: I0625 18:44:30.252024 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2217986557928b946199e82fe17737e-ca-certs\") pod \"kube-apiserver-ci-4012.0.0-a-d50f1c7422\" (UID: \"c2217986557928b946199e82fe17737e\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:30.252084 kubelet[3241]: I0625 18:44:30.252083 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf73af8a8f552d40b4eb9f67e4d720d6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.0.0-a-d50f1c7422\" (UID: \"bf73af8a8f552d40b4eb9f67e4d720d6\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:30.252431 kubelet[3241]: I0625 18:44:30.252120 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d22f14deedaeadd2a14810aa35e463d6-kubeconfig\") pod \"kube-scheduler-ci-4012.0.0-a-d50f1c7422\" (UID: \"d22f14deedaeadd2a14810aa35e463d6\") " pod="kube-system/kube-scheduler-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:30.252431 kubelet[3241]: I0625 18:44:30.252154 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/bf73af8a8f552d40b4eb9f67e4d720d6-k8s-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-d50f1c7422\" (UID: \"bf73af8a8f552d40b4eb9f67e4d720d6\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:30.252431 kubelet[3241]: I0625 18:44:30.252185 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bf73af8a8f552d40b4eb9f67e4d720d6-kubeconfig\") pod \"kube-controller-manager-ci-4012.0.0-a-d50f1c7422\" (UID: \"bf73af8a8f552d40b4eb9f67e4d720d6\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:30.252431 kubelet[3241]: I0625 18:44:30.252218 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2217986557928b946199e82fe17737e-k8s-certs\") pod \"kube-apiserver-ci-4012.0.0-a-d50f1c7422\" (UID: \"c2217986557928b946199e82fe17737e\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:30.252431 kubelet[3241]: I0625 18:44:30.252255 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2217986557928b946199e82fe17737e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.0.0-a-d50f1c7422\" (UID: \"c2217986557928b946199e82fe17737e\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:30.252588 kubelet[3241]: I0625 18:44:30.252304 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf73af8a8f552d40b4eb9f67e4d720d6-ca-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-d50f1c7422\" (UID: \"bf73af8a8f552d40b4eb9f67e4d720d6\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:30.252588 kubelet[3241]: I0625 18:44:30.252344 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bf73af8a8f552d40b4eb9f67e4d720d6-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.0.0-a-d50f1c7422\" (UID: \"bf73af8a8f552d40b4eb9f67e4d720d6\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:31.033890 kubelet[3241]: I0625 18:44:31.033843 3241 apiserver.go:52] "Watching apiserver" Jun 25 18:44:31.050505 kubelet[3241]: I0625 18:44:31.050451 3241 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:44:31.087667 kubelet[3241]: I0625 18:44:31.087260 3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4012.0.0-a-d50f1c7422" podStartSLOduration=1.087215982 podStartE2EDuration="1.087215982s" podCreationTimestamp="2024-06-25 18:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:31.087175982 +0000 UTC m=+1.114384389" watchObservedRunningTime="2024-06-25 18:44:31.087215982 +0000 UTC m=+1.114424389" Jun 25 18:44:31.103658 kubelet[3241]: I0625 18:44:31.103568 3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4012.0.0-a-d50f1c7422" podStartSLOduration=3.103525196 podStartE2EDuration="3.103525196s" podCreationTimestamp="2024-06-25 18:44:28 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:31.103355396 +0000 UTC m=+1.130563703" watchObservedRunningTime="2024-06-25 18:44:31.103525196 +0000 UTC m=+1.130733503" Jun 25 18:44:31.104054 kubelet[3241]: I0625 18:44:31.103677 3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4012.0.0-a-d50f1c7422" podStartSLOduration=1.103651796 podStartE2EDuration="1.103651796s" podCreationTimestamp="2024-06-25 18:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:31.095035889 +0000 UTC m=+1.122244196" watchObservedRunningTime="2024-06-25 18:44:31.103651796 +0000 UTC m=+1.130860203" Jun 25 18:44:31.112587 kubelet[3241]: W0625 18:44:31.112545 3241 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:44:31.112672 kubelet[3241]: E0625 18:44:31.112620 3241 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4012.0.0-a-d50f1c7422\" already exists" pod="kube-system/kube-apiserver-ci-4012.0.0-a-d50f1c7422" Jun 25 18:44:36.290091 sudo[2249]: pam_unix(sudo:session): session closed for user root Jun 25 18:44:36.393133 sshd[2246]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:36.397500 systemd[1]: sshd@6-10.200.8.40:22-10.200.16.10:41690.service: Deactivated successfully. Jun 25 18:44:36.399566 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 18:44:36.399811 systemd[1]: session-9.scope: Consumed 4.167s CPU time, 137.7M memory peak, 0B memory swap peak. Jun 25 18:44:36.400412 systemd-logind[1697]: Session 9 logged out. Waiting for processes to exit. Jun 25 18:44:36.401348 systemd-logind[1697]: Removed session 9. Jun 25 18:44:41.748045 kubelet[3241]: I0625 18:44:41.748012 3241 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 18:44:41.748710 kubelet[3241]: I0625 18:44:41.748658 3241 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 18:44:41.748798 containerd[1716]: time="2024-06-25T18:44:41.748432242Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 18:44:42.743109 kubelet[3241]: I0625 18:44:42.742510 3241 topology_manager.go:215] "Topology Admit Handler" podUID="2f52fca3-ec4d-41be-89db-5b74de955910" podNamespace="kube-system" podName="kube-proxy-pkf6t" Jun 25 18:44:42.755352 systemd[1]: Created slice kubepods-besteffort-pod2f52fca3_ec4d_41be_89db_5b74de955910.slice - libcontainer container kubepods-besteffort-pod2f52fca3_ec4d_41be_89db_5b74de955910.slice. Jun 25 18:44:42.815554 kubelet[3241]: I0625 18:44:42.814700 3241 topology_manager.go:215] "Topology Admit Handler" podUID="8bf2877f-2e27-414d-a577-1d8d70a983c9" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-zjd4f" Jun 25 18:44:42.824733 systemd[1]: Created slice kubepods-besteffort-pod8bf2877f_2e27_414d_a577_1d8d70a983c9.slice - libcontainer container kubepods-besteffort-pod8bf2877f_2e27_414d_a577_1d8d70a983c9.slice. 
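[Editor's sketch, not part of the journal.] The restarted kubelet above (PID 3241) warns that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir should move into the file passed via --config, and its container-manager dump records the effective settings (systemd cgroup driver, the default hard-eviction thresholds, pod PID limit, static pod path). A KubeletConfiguration equivalent to those logged values might look like the sketch below; the file location and the containerd socket are assumptions, since neither is printed in the log.

# Hypothetical kubelet config file; only the field values are taken from the
# nodeConfig dump and kubelet log lines above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                       # "CgroupDriver":"systemd"
staticPodPath: /etc/kubernetes/manifests    # "Adding static pod path"
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock  # assumed; flag value not logged
podPidsLimit: -1                            # "PodPidsLimit":-1
evictionHard:                               # HardEvictionThresholds from the dump
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
cpuManagerPolicy: none
topologyManagerPolicy: none
memoryManagerPolicy: None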
Jun 25 18:44:42.838612 kubelet[3241]: I0625 18:44:42.838583 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prn8d\" (UniqueName: \"kubernetes.io/projected/8bf2877f-2e27-414d-a577-1d8d70a983c9-kube-api-access-prn8d\") pod \"tigera-operator-76c4974c85-zjd4f\" (UID: \"8bf2877f-2e27-414d-a577-1d8d70a983c9\") " pod="tigera-operator/tigera-operator-76c4974c85-zjd4f" Jun 25 18:44:42.838612 kubelet[3241]: I0625 18:44:42.838636 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2f52fca3-ec4d-41be-89db-5b74de955910-kube-proxy\") pod \"kube-proxy-pkf6t\" (UID: \"2f52fca3-ec4d-41be-89db-5b74de955910\") " pod="kube-system/kube-proxy-pkf6t" Jun 25 18:44:42.838836 kubelet[3241]: I0625 18:44:42.838724 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f52fca3-ec4d-41be-89db-5b74de955910-lib-modules\") pod \"kube-proxy-pkf6t\" (UID: \"2f52fca3-ec4d-41be-89db-5b74de955910\") " pod="kube-system/kube-proxy-pkf6t" Jun 25 18:44:42.838836 kubelet[3241]: I0625 18:44:42.838774 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvjtq\" (UniqueName: \"kubernetes.io/projected/2f52fca3-ec4d-41be-89db-5b74de955910-kube-api-access-qvjtq\") pod \"kube-proxy-pkf6t\" (UID: \"2f52fca3-ec4d-41be-89db-5b74de955910\") " pod="kube-system/kube-proxy-pkf6t" Jun 25 18:44:42.838836 kubelet[3241]: I0625 18:44:42.838811 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8bf2877f-2e27-414d-a577-1d8d70a983c9-var-lib-calico\") pod \"tigera-operator-76c4974c85-zjd4f\" (UID: \"8bf2877f-2e27-414d-a577-1d8d70a983c9\") " pod="tigera-operator/tigera-operator-76c4974c85-zjd4f" Jun 25 18:44:42.838972 kubelet[3241]: I0625 18:44:42.838864 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f52fca3-ec4d-41be-89db-5b74de955910-xtables-lock\") pod \"kube-proxy-pkf6t\" (UID: \"2f52fca3-ec4d-41be-89db-5b74de955910\") " pod="kube-system/kube-proxy-pkf6t" Jun 25 18:44:43.064676 containerd[1716]: time="2024-06-25T18:44:43.064521786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pkf6t,Uid:2f52fca3-ec4d-41be-89db-5b74de955910,Namespace:kube-system,Attempt:0,}" Jun 25 18:44:43.109115 containerd[1716]: time="2024-06-25T18:44:43.108756109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:43.109115 containerd[1716]: time="2024-06-25T18:44:43.108815809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:43.109115 containerd[1716]: time="2024-06-25T18:44:43.108842209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:43.109115 containerd[1716]: time="2024-06-25T18:44:43.108901809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:43.129969 containerd[1716]: time="2024-06-25T18:44:43.129926167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-zjd4f,Uid:8bf2877f-2e27-414d-a577-1d8d70a983c9,Namespace:tigera-operator,Attempt:0,}" Jun 25 18:44:43.138422 systemd[1]: Started cri-containerd-a1e537fafd8cbe6fa41e5232f5bb6f09f34c43a2e90fd41e951733b84f8e8498.scope - libcontainer container a1e537fafd8cbe6fa41e5232f5bb6f09f34c43a2e90fd41e951733b84f8e8498. Jun 25 18:44:43.160527 containerd[1716]: time="2024-06-25T18:44:43.159487049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pkf6t,Uid:2f52fca3-ec4d-41be-89db-5b74de955910,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1e537fafd8cbe6fa41e5232f5bb6f09f34c43a2e90fd41e951733b84f8e8498\"" Jun 25 18:44:43.166005 containerd[1716]: time="2024-06-25T18:44:43.165957367Z" level=info msg="CreateContainer within sandbox \"a1e537fafd8cbe6fa41e5232f5bb6f09f34c43a2e90fd41e951733b84f8e8498\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 18:44:43.183774 containerd[1716]: time="2024-06-25T18:44:43.183455016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:43.183774 containerd[1716]: time="2024-06-25T18:44:43.183536916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:43.183774 containerd[1716]: time="2024-06-25T18:44:43.183564816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:43.183774 containerd[1716]: time="2024-06-25T18:44:43.183611516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:43.202460 systemd[1]: Started cri-containerd-685d1ce91e5691bd4cc2b2ecf8af79a7f81310f1afa3194605e8844cbd08b496.scope - libcontainer container 685d1ce91e5691bd4cc2b2ecf8af79a7f81310f1afa3194605e8844cbd08b496. Jun 25 18:44:43.213558 containerd[1716]: time="2024-06-25T18:44:43.213503899Z" level=info msg="CreateContainer within sandbox \"a1e537fafd8cbe6fa41e5232f5bb6f09f34c43a2e90fd41e951733b84f8e8498\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e8303986c5603495f2735acf0900a37692de7cf945061b4783290ca0651c2150\"" Jun 25 18:44:43.215835 containerd[1716]: time="2024-06-25T18:44:43.214601502Z" level=info msg="StartContainer for \"e8303986c5603495f2735acf0900a37692de7cf945061b4783290ca0651c2150\"" Jun 25 18:44:43.250534 systemd[1]: Started cri-containerd-e8303986c5603495f2735acf0900a37692de7cf945061b4783290ca0651c2150.scope - libcontainer container e8303986c5603495f2735acf0900a37692de7cf945061b4783290ca0651c2150. 
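[Editor's sketch, not part of the journal.] The kube-proxy container started above mounts the three volumes named in the reconciler entries: the kube-proxy ConfigMap plus the lib-modules and xtables-lock host paths. In a stock kubeadm DaemonSet those map roughly as sketched here; the volume names come from the log, while the host paths are the common defaults and are assumed rather than read from it.

# Volume section of a typical kube-proxy DaemonSet -- sketch only.
volumes:
- name: kube-proxy
  configMap:
    name: kube-proxy              # carries config.conf and kubeconfig.conf
- name: xtables-lock
  hostPath:
    path: /run/xtables.lock
    type: FileOrCreate
- name: lib-modules
  hostPath:
    path: /lib/modules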
Jun 25 18:44:43.257613 containerd[1716]: time="2024-06-25T18:44:43.257537721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-zjd4f,Uid:8bf2877f-2e27-414d-a577-1d8d70a983c9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"685d1ce91e5691bd4cc2b2ecf8af79a7f81310f1afa3194605e8844cbd08b496\"" Jun 25 18:44:43.261304 containerd[1716]: time="2024-06-25T18:44:43.261277831Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 18:44:43.357645 containerd[1716]: time="2024-06-25T18:44:43.357533798Z" level=info msg="StartContainer for \"e8303986c5603495f2735acf0900a37692de7cf945061b4783290ca0651c2150\" returns successfully" Jun 25 18:44:43.956106 systemd[1]: run-containerd-runc-k8s.io-a1e537fafd8cbe6fa41e5232f5bb6f09f34c43a2e90fd41e951733b84f8e8498-runc.AtOzVh.mount: Deactivated successfully. Jun 25 18:44:44.991413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount364089632.mount: Deactivated successfully. Jun 25 18:44:46.092083 containerd[1716]: time="2024-06-25T18:44:46.092034301Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:46.095119 containerd[1716]: time="2024-06-25T18:44:46.094991604Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076072" Jun 25 18:44:46.099119 containerd[1716]: time="2024-06-25T18:44:46.099052407Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:46.105665 containerd[1716]: time="2024-06-25T18:44:46.105627513Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:46.106590 containerd[1716]: time="2024-06-25T18:44:46.106445213Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.844999381s" Jun 25 18:44:46.106590 containerd[1716]: time="2024-06-25T18:44:46.106481213Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 18:44:46.108383 containerd[1716]: time="2024-06-25T18:44:46.108347615Z" level=info msg="CreateContainer within sandbox \"685d1ce91e5691bd4cc2b2ecf8af79a7f81310f1afa3194605e8844cbd08b496\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 18:44:46.149073 containerd[1716]: time="2024-06-25T18:44:46.149038749Z" level=info msg="CreateContainer within sandbox \"685d1ce91e5691bd4cc2b2ecf8af79a7f81310f1afa3194605e8844cbd08b496\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"aa1af2b7c455330f9cee526350a61d84c7130c336355b1210e810312c72fb573\"" Jun 25 18:44:46.149721 containerd[1716]: time="2024-06-25T18:44:46.149436150Z" level=info msg="StartContainer for \"aa1af2b7c455330f9cee526350a61d84c7130c336355b1210e810312c72fb573\"" Jun 25 18:44:46.181431 systemd[1]: Started cri-containerd-aa1af2b7c455330f9cee526350a61d84c7130c336355b1210e810312c72fb573.scope - libcontainer container 
aa1af2b7c455330f9cee526350a61d84c7130c336355b1210e810312c72fb573. Jun 25 18:44:46.207774 containerd[1716]: time="2024-06-25T18:44:46.207691900Z" level=info msg="StartContainer for \"aa1af2b7c455330f9cee526350a61d84c7130c336355b1210e810312c72fb573\" returns successfully" Jun 25 18:44:47.141561 kubelet[3241]: I0625 18:44:47.141517 3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pkf6t" podStartSLOduration=5.141475813 podStartE2EDuration="5.141475813s" podCreationTimestamp="2024-06-25 18:44:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:44.137451457 +0000 UTC m=+14.164659864" watchObservedRunningTime="2024-06-25 18:44:47.141475813 +0000 UTC m=+17.168684120" Jun 25 18:44:49.355313 kubelet[3241]: I0625 18:44:49.353733 3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-zjd4f" podStartSLOduration=4.50592535 podStartE2EDuration="7.353664039s" podCreationTimestamp="2024-06-25 18:44:42 +0000 UTC" firstStartedPulling="2024-06-25 18:44:43.259121525 +0000 UTC m=+13.286329832" lastFinishedPulling="2024-06-25 18:44:46.106860214 +0000 UTC m=+16.134068521" observedRunningTime="2024-06-25 18:44:47.141850713 +0000 UTC m=+17.169059020" watchObservedRunningTime="2024-06-25 18:44:49.353664039 +0000 UTC m=+19.380872346" Jun 25 18:44:49.355313 kubelet[3241]: I0625 18:44:49.353896 3241 topology_manager.go:215] "Topology Admit Handler" podUID="6444057f-1b7c-4c51-a9c6-f097e94e922f" podNamespace="calico-system" podName="calico-typha-9c454cc78-66ls8" Jun 25 18:44:49.365447 systemd[1]: Created slice kubepods-besteffort-pod6444057f_1b7c_4c51_a9c6_f097e94e922f.slice - libcontainer container kubepods-besteffort-pod6444057f_1b7c_4c51_a9c6_f097e94e922f.slice. Jun 25 18:44:49.384579 kubelet[3241]: I0625 18:44:49.384398 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prc4r\" (UniqueName: \"kubernetes.io/projected/6444057f-1b7c-4c51-a9c6-f097e94e922f-kube-api-access-prc4r\") pod \"calico-typha-9c454cc78-66ls8\" (UID: \"6444057f-1b7c-4c51-a9c6-f097e94e922f\") " pod="calico-system/calico-typha-9c454cc78-66ls8" Jun 25 18:44:49.384579 kubelet[3241]: I0625 18:44:49.384450 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6444057f-1b7c-4c51-a9c6-f097e94e922f-typha-certs\") pod \"calico-typha-9c454cc78-66ls8\" (UID: \"6444057f-1b7c-4c51-a9c6-f097e94e922f\") " pod="calico-system/calico-typha-9c454cc78-66ls8" Jun 25 18:44:49.384579 kubelet[3241]: I0625 18:44:49.384483 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6444057f-1b7c-4c51-a9c6-f097e94e922f-tigera-ca-bundle\") pod \"calico-typha-9c454cc78-66ls8\" (UID: \"6444057f-1b7c-4c51-a9c6-f097e94e922f\") " pod="calico-system/calico-typha-9c454cc78-66ls8" Jun 25 18:44:49.437515 kubelet[3241]: I0625 18:44:49.435767 3241 topology_manager.go:215] "Topology Admit Handler" podUID="81e73e0b-5f53-4b67-82ce-77339b72a50c" podNamespace="calico-system" podName="calico-node-2k7nb" Jun 25 18:44:49.445917 systemd[1]: Created slice kubepods-besteffort-pod81e73e0b_5f53_4b67_82ce_77339b72a50c.slice - libcontainer container kubepods-besteffort-pod81e73e0b_5f53_4b67_82ce_77339b72a50c.slice. 
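[Editor's sketch, not part of the journal.] After pulling quay.io/tigera/operator:v1.34.0, the operator reconciles an Installation resource and creates the calico-typha and calico-node workloads whose volumes fill the entries that follow. The CR itself never appears in this log; a minimal example of the kind usually applied alongside the operator is sketched below, with the pool CIDR being the common Calico default and only an assumption here, consistent with the node's 192.168.0.0/24 allocation logged earlier.

# Illustrative Installation CR for the tigera-operator; not taken from this log.
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - cidr: 192.168.0.0/16        # assumed default pool; this node was handed 192.168.0.0/24 of it
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled

The repeated nodeagent~uds driver-call failures below are typically harmless at this stage: calico-node's flexvol-driver init container installs that executable into the flexvol-driver-host path mounted above, after which the kubelet's FlexVolume prober stops reporting it as missing.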
Jun 25 18:44:49.485252 kubelet[3241]: I0625 18:44:49.485207 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/81e73e0b-5f53-4b67-82ce-77339b72a50c-cni-bin-dir\") pod \"calico-node-2k7nb\" (UID: \"81e73e0b-5f53-4b67-82ce-77339b72a50c\") " pod="calico-system/calico-node-2k7nb" Jun 25 18:44:49.485409 kubelet[3241]: I0625 18:44:49.485284 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7trzr\" (UniqueName: \"kubernetes.io/projected/81e73e0b-5f53-4b67-82ce-77339b72a50c-kube-api-access-7trzr\") pod \"calico-node-2k7nb\" (UID: \"81e73e0b-5f53-4b67-82ce-77339b72a50c\") " pod="calico-system/calico-node-2k7nb" Jun 25 18:44:49.485409 kubelet[3241]: I0625 18:44:49.485315 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/81e73e0b-5f53-4b67-82ce-77339b72a50c-node-certs\") pod \"calico-node-2k7nb\" (UID: \"81e73e0b-5f53-4b67-82ce-77339b72a50c\") " pod="calico-system/calico-node-2k7nb" Jun 25 18:44:49.485409 kubelet[3241]: I0625 18:44:49.485342 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/81e73e0b-5f53-4b67-82ce-77339b72a50c-var-run-calico\") pod \"calico-node-2k7nb\" (UID: \"81e73e0b-5f53-4b67-82ce-77339b72a50c\") " pod="calico-system/calico-node-2k7nb" Jun 25 18:44:49.485409 kubelet[3241]: I0625 18:44:49.485376 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/81e73e0b-5f53-4b67-82ce-77339b72a50c-var-lib-calico\") pod \"calico-node-2k7nb\" (UID: \"81e73e0b-5f53-4b67-82ce-77339b72a50c\") " pod="calico-system/calico-node-2k7nb" Jun 25 18:44:49.485409 kubelet[3241]: I0625 18:44:49.485404 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/81e73e0b-5f53-4b67-82ce-77339b72a50c-policysync\") pod \"calico-node-2k7nb\" (UID: \"81e73e0b-5f53-4b67-82ce-77339b72a50c\") " pod="calico-system/calico-node-2k7nb" Jun 25 18:44:49.485621 kubelet[3241]: I0625 18:44:49.485434 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81e73e0b-5f53-4b67-82ce-77339b72a50c-tigera-ca-bundle\") pod \"calico-node-2k7nb\" (UID: \"81e73e0b-5f53-4b67-82ce-77339b72a50c\") " pod="calico-system/calico-node-2k7nb" Jun 25 18:44:49.485621 kubelet[3241]: I0625 18:44:49.485482 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81e73e0b-5f53-4b67-82ce-77339b72a50c-lib-modules\") pod \"calico-node-2k7nb\" (UID: \"81e73e0b-5f53-4b67-82ce-77339b72a50c\") " pod="calico-system/calico-node-2k7nb" Jun 25 18:44:49.485621 kubelet[3241]: I0625 18:44:49.485510 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/81e73e0b-5f53-4b67-82ce-77339b72a50c-cni-log-dir\") pod \"calico-node-2k7nb\" (UID: \"81e73e0b-5f53-4b67-82ce-77339b72a50c\") " pod="calico-system/calico-node-2k7nb" Jun 25 18:44:49.485621 kubelet[3241]: I0625 18:44:49.485555 3241 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81e73e0b-5f53-4b67-82ce-77339b72a50c-xtables-lock\") pod \"calico-node-2k7nb\" (UID: \"81e73e0b-5f53-4b67-82ce-77339b72a50c\") " pod="calico-system/calico-node-2k7nb" Jun 25 18:44:49.485621 kubelet[3241]: I0625 18:44:49.485585 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/81e73e0b-5f53-4b67-82ce-77339b72a50c-cni-net-dir\") pod \"calico-node-2k7nb\" (UID: \"81e73e0b-5f53-4b67-82ce-77339b72a50c\") " pod="calico-system/calico-node-2k7nb" Jun 25 18:44:49.485813 kubelet[3241]: I0625 18:44:49.485615 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/81e73e0b-5f53-4b67-82ce-77339b72a50c-flexvol-driver-host\") pod \"calico-node-2k7nb\" (UID: \"81e73e0b-5f53-4b67-82ce-77339b72a50c\") " pod="calico-system/calico-node-2k7nb" Jun 25 18:44:49.588301 kubelet[3241]: I0625 18:44:49.587967 3241 topology_manager.go:215] "Topology Admit Handler" podUID="459a93ef-d13a-41c8-9d3f-50f9914d6a1a" podNamespace="calico-system" podName="csi-node-driver-42k4m" Jun 25 18:44:49.589247 kubelet[3241]: E0625 18:44:49.588803 3241 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42k4m" podUID="459a93ef-d13a-41c8-9d3f-50f9914d6a1a" Jun 25 18:44:49.605872 kubelet[3241]: E0625 18:44:49.604362 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.605872 kubelet[3241]: W0625 18:44:49.604381 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.605872 kubelet[3241]: E0625 18:44:49.604417 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.628925 kubelet[3241]: E0625 18:44:49.628844 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.628925 kubelet[3241]: W0625 18:44:49.628865 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.628925 kubelet[3241]: E0625 18:44:49.628890 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:49.666224 kubelet[3241]: E0625 18:44:49.666164 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.666658 kubelet[3241]: W0625 18:44:49.666286 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.666658 kubelet[3241]: E0625 18:44:49.666318 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.666870 kubelet[3241]: E0625 18:44:49.666848 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.666870 kubelet[3241]: W0625 18:44:49.666866 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.667004 kubelet[3241]: E0625 18:44:49.666885 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.667111 kubelet[3241]: E0625 18:44:49.667091 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.667111 kubelet[3241]: W0625 18:44:49.667106 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.667254 kubelet[3241]: E0625 18:44:49.667125 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.667384 kubelet[3241]: E0625 18:44:49.667365 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.667442 kubelet[3241]: W0625 18:44:49.667380 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.667442 kubelet[3241]: E0625 18:44:49.667405 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.667651 kubelet[3241]: E0625 18:44:49.667635 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.667714 kubelet[3241]: W0625 18:44:49.667647 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.667714 kubelet[3241]: E0625 18:44:49.667677 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:49.667894 kubelet[3241]: E0625 18:44:49.667880 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.667959 kubelet[3241]: W0625 18:44:49.667892 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.667959 kubelet[3241]: E0625 18:44:49.667923 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.668134 kubelet[3241]: E0625 18:44:49.668120 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.668191 kubelet[3241]: W0625 18:44:49.668133 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.668191 kubelet[3241]: E0625 18:44:49.668166 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.668405 kubelet[3241]: E0625 18:44:49.668391 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.668405 kubelet[3241]: W0625 18:44:49.668403 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.668571 kubelet[3241]: E0625 18:44:49.668418 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.668641 kubelet[3241]: E0625 18:44:49.668613 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.668641 kubelet[3241]: W0625 18:44:49.668624 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.668641 kubelet[3241]: E0625 18:44:49.668639 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.668870 kubelet[3241]: E0625 18:44:49.668852 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.668870 kubelet[3241]: W0625 18:44:49.668865 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.669022 kubelet[3241]: E0625 18:44:49.668888 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:49.669485 kubelet[3241]: E0625 18:44:49.669366 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.669485 kubelet[3241]: W0625 18:44:49.669382 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.669485 kubelet[3241]: E0625 18:44:49.669399 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.669735 kubelet[3241]: E0625 18:44:49.669722 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.669884 kubelet[3241]: W0625 18:44:49.669806 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.669884 kubelet[3241]: E0625 18:44:49.669828 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.670218 kubelet[3241]: E0625 18:44:49.670191 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.670218 kubelet[3241]: W0625 18:44:49.670220 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.670722 kubelet[3241]: E0625 18:44:49.670237 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.670722 kubelet[3241]: E0625 18:44:49.670542 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.670722 kubelet[3241]: W0625 18:44:49.670554 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.670722 kubelet[3241]: E0625 18:44:49.670577 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.670931 kubelet[3241]: E0625 18:44:49.670787 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.670931 kubelet[3241]: W0625 18:44:49.670797 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.670931 kubelet[3241]: E0625 18:44:49.670826 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:49.671852 kubelet[3241]: E0625 18:44:49.671054 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.671852 kubelet[3241]: W0625 18:44:49.671080 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.671852 kubelet[3241]: E0625 18:44:49.671096 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.671852 kubelet[3241]: E0625 18:44:49.671389 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.671852 kubelet[3241]: W0625 18:44:49.671400 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.671852 kubelet[3241]: E0625 18:44:49.671416 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.671852 kubelet[3241]: E0625 18:44:49.671620 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.671852 kubelet[3241]: W0625 18:44:49.671629 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.671852 kubelet[3241]: E0625 18:44:49.671643 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.671852 kubelet[3241]: E0625 18:44:49.671846 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.672305 kubelet[3241]: W0625 18:44:49.671857 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.672305 kubelet[3241]: E0625 18:44:49.671871 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.672305 kubelet[3241]: E0625 18:44:49.672090 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.672305 kubelet[3241]: W0625 18:44:49.672099 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.672305 kubelet[3241]: E0625 18:44:49.672114 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:49.672853 containerd[1716]: time="2024-06-25T18:44:49.672811017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9c454cc78-66ls8,Uid:6444057f-1b7c-4c51-a9c6-f097e94e922f,Namespace:calico-system,Attempt:0,}" Jun 25 18:44:49.686499 kubelet[3241]: E0625 18:44:49.686479 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.686499 kubelet[3241]: W0625 18:44:49.686497 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.686632 kubelet[3241]: E0625 18:44:49.686515 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.686632 kubelet[3241]: I0625 18:44:49.686570 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/459a93ef-d13a-41c8-9d3f-50f9914d6a1a-registration-dir\") pod \"csi-node-driver-42k4m\" (UID: \"459a93ef-d13a-41c8-9d3f-50f9914d6a1a\") " pod="calico-system/csi-node-driver-42k4m" Jun 25 18:44:49.686989 kubelet[3241]: E0625 18:44:49.686831 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.686989 kubelet[3241]: W0625 18:44:49.686847 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.686989 kubelet[3241]: E0625 18:44:49.686886 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.686989 kubelet[3241]: I0625 18:44:49.686914 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mppkz\" (UniqueName: \"kubernetes.io/projected/459a93ef-d13a-41c8-9d3f-50f9914d6a1a-kube-api-access-mppkz\") pod \"csi-node-driver-42k4m\" (UID: \"459a93ef-d13a-41c8-9d3f-50f9914d6a1a\") " pod="calico-system/csi-node-driver-42k4m" Jun 25 18:44:49.687185 kubelet[3241]: E0625 18:44:49.687154 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.687185 kubelet[3241]: W0625 18:44:49.687165 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.687299 kubelet[3241]: E0625 18:44:49.687189 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:49.687779 kubelet[3241]: E0625 18:44:49.687755 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.687779 kubelet[3241]: W0625 18:44:49.687773 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.687898 kubelet[3241]: E0625 18:44:49.687790 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.688670 kubelet[3241]: E0625 18:44:49.688648 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.688670 kubelet[3241]: W0625 18:44:49.688666 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.688880 kubelet[3241]: E0625 18:44:49.688793 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.689577 kubelet[3241]: I0625 18:44:49.689552 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/459a93ef-d13a-41c8-9d3f-50f9914d6a1a-kubelet-dir\") pod \"csi-node-driver-42k4m\" (UID: \"459a93ef-d13a-41c8-9d3f-50f9914d6a1a\") " pod="calico-system/csi-node-driver-42k4m" Jun 25 18:44:49.689646 kubelet[3241]: E0625 18:44:49.689469 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.689646 kubelet[3241]: W0625 18:44:49.689610 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.689646 kubelet[3241]: E0625 18:44:49.689626 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.689960 kubelet[3241]: E0625 18:44:49.689906 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.689960 kubelet[3241]: W0625 18:44:49.689920 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.689960 kubelet[3241]: E0625 18:44:49.689960 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:49.690285 kubelet[3241]: E0625 18:44:49.690223 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.690285 kubelet[3241]: W0625 18:44:49.690247 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.690398 kubelet[3241]: E0625 18:44:49.690380 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.690711 kubelet[3241]: E0625 18:44:49.690554 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.690711 kubelet[3241]: W0625 18:44:49.690567 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.690711 kubelet[3241]: E0625 18:44:49.690585 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.690711 kubelet[3241]: I0625 18:44:49.690626 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/459a93ef-d13a-41c8-9d3f-50f9914d6a1a-varrun\") pod \"csi-node-driver-42k4m\" (UID: \"459a93ef-d13a-41c8-9d3f-50f9914d6a1a\") " pod="calico-system/csi-node-driver-42k4m" Jun 25 18:44:49.690909 kubelet[3241]: E0625 18:44:49.690870 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.690909 kubelet[3241]: W0625 18:44:49.690884 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.690909 kubelet[3241]: E0625 18:44:49.690900 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.691205 kubelet[3241]: E0625 18:44:49.691176 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.691205 kubelet[3241]: W0625 18:44:49.691189 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.692560 kubelet[3241]: E0625 18:44:49.691227 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:49.692560 kubelet[3241]: E0625 18:44:49.691490 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.692560 kubelet[3241]: W0625 18:44:49.691501 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.692560 kubelet[3241]: E0625 18:44:49.691554 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.692560 kubelet[3241]: E0625 18:44:49.691804 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.692560 kubelet[3241]: W0625 18:44:49.691819 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.692560 kubelet[3241]: E0625 18:44:49.691841 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.692560 kubelet[3241]: I0625 18:44:49.691880 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/459a93ef-d13a-41c8-9d3f-50f9914d6a1a-socket-dir\") pod \"csi-node-driver-42k4m\" (UID: \"459a93ef-d13a-41c8-9d3f-50f9914d6a1a\") " pod="calico-system/csi-node-driver-42k4m" Jun 25 18:44:49.692560 kubelet[3241]: E0625 18:44:49.692264 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.692941 kubelet[3241]: W0625 18:44:49.692302 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.692941 kubelet[3241]: E0625 18:44:49.692321 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.693493 kubelet[3241]: E0625 18:44:49.693398 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.693493 kubelet[3241]: W0625 18:44:49.693415 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.693493 kubelet[3241]: E0625 18:44:49.693434 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.735239 containerd[1716]: time="2024-06-25T18:44:49.735070371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:49.735239 containerd[1716]: time="2024-06-25T18:44:49.735138471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:49.735239 containerd[1716]: time="2024-06-25T18:44:49.735170771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:49.735239 containerd[1716]: time="2024-06-25T18:44:49.735190671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:49.752507 containerd[1716]: time="2024-06-25T18:44:49.752457986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2k7nb,Uid:81e73e0b-5f53-4b67-82ce-77339b72a50c,Namespace:calico-system,Attempt:0,}" Jun 25 18:44:49.761584 systemd[1]: Started cri-containerd-7d7893977fd526585d69eead11179a7fcb15bac96103ba48886a45a351e7bd2f.scope - libcontainer container 7d7893977fd526585d69eead11179a7fcb15bac96103ba48886a45a351e7bd2f. Jun 25 18:44:49.792822 kubelet[3241]: E0625 18:44:49.792685 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.792822 kubelet[3241]: W0625 18:44:49.792706 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.792822 kubelet[3241]: E0625 18:44:49.792731 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.793514 kubelet[3241]: E0625 18:44:49.793292 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.793514 kubelet[3241]: W0625 18:44:49.793309 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.793514 kubelet[3241]: E0625 18:44:49.793340 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.794055 kubelet[3241]: E0625 18:44:49.793837 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.794055 kubelet[3241]: W0625 18:44:49.793850 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.794055 kubelet[3241]: E0625 18:44:49.793880 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:49.794502 kubelet[3241]: E0625 18:44:49.794344 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.794502 kubelet[3241]: W0625 18:44:49.794358 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.794502 kubelet[3241]: E0625 18:44:49.794389 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.795014 kubelet[3241]: E0625 18:44:49.794808 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.795014 kubelet[3241]: W0625 18:44:49.794820 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.795014 kubelet[3241]: E0625 18:44:49.794902 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.795399 kubelet[3241]: E0625 18:44:49.795242 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.795399 kubelet[3241]: W0625 18:44:49.795256 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.795399 kubelet[3241]: E0625 18:44:49.795375 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.795820 kubelet[3241]: E0625 18:44:49.795722 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.795820 kubelet[3241]: W0625 18:44:49.795736 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.795820 kubelet[3241]: E0625 18:44:49.795796 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.796535 kubelet[3241]: E0625 18:44:49.796251 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.796535 kubelet[3241]: W0625 18:44:49.796277 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.796535 kubelet[3241]: E0625 18:44:49.796377 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:49.796535 kubelet[3241]: E0625 18:44:49.796508 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.796535 kubelet[3241]: W0625 18:44:49.796518 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.797123 kubelet[3241]: E0625 18:44:49.796849 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.797123 kubelet[3241]: E0625 18:44:49.797073 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.797123 kubelet[3241]: W0625 18:44:49.797085 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.797644 kubelet[3241]: E0625 18:44:49.797380 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.797644 kubelet[3241]: E0625 18:44:49.797554 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.797644 kubelet[3241]: W0625 18:44:49.797565 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.797924 kubelet[3241]: E0625 18:44:49.797822 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.798133 kubelet[3241]: E0625 18:44:49.798036 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.798133 kubelet[3241]: W0625 18:44:49.798050 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.798420 kubelet[3241]: E0625 18:44:49.798297 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.798604 kubelet[3241]: E0625 18:44:49.798547 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.798604 kubelet[3241]: W0625 18:44:49.798558 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.798856 kubelet[3241]: E0625 18:44:49.798805 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:49.799122 kubelet[3241]: E0625 18:44:49.798981 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.799122 kubelet[3241]: W0625 18:44:49.798990 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.799317 kubelet[3241]: E0625 18:44:49.799297 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.799689 kubelet[3241]: E0625 18:44:49.799533 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.799944 kubelet[3241]: W0625 18:44:49.799781 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.799944 kubelet[3241]: E0625 18:44:49.799924 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.800398 kubelet[3241]: E0625 18:44:49.800242 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.800398 kubelet[3241]: W0625 18:44:49.800265 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.800629 kubelet[3241]: E0625 18:44:49.800492 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.800978 kubelet[3241]: E0625 18:44:49.800814 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.800978 kubelet[3241]: W0625 18:44:49.800832 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.801335 kubelet[3241]: E0625 18:44:49.801097 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.801335 kubelet[3241]: E0625 18:44:49.801178 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.801335 kubelet[3241]: W0625 18:44:49.801186 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.801335 kubelet[3241]: E0625 18:44:49.801263 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:49.801911 kubelet[3241]: E0625 18:44:49.801665 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.801911 kubelet[3241]: W0625 18:44:49.801681 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.801911 kubelet[3241]: E0625 18:44:49.801765 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.801911 kubelet[3241]: E0625 18:44:49.801888 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.801911 kubelet[3241]: W0625 18:44:49.801896 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.802344 kubelet[3241]: E0625 18:44:49.802110 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.802902 kubelet[3241]: E0625 18:44:49.802615 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.802902 kubelet[3241]: W0625 18:44:49.802639 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.802902 kubelet[3241]: E0625 18:44:49.802728 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.803460 kubelet[3241]: E0625 18:44:49.802977 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.803460 kubelet[3241]: W0625 18:44:49.802988 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.803460 kubelet[3241]: E0625 18:44:49.803127 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.807016 kubelet[3241]: E0625 18:44:49.806342 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.807016 kubelet[3241]: W0625 18:44:49.806358 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.807016 kubelet[3241]: E0625 18:44:49.806560 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:49.807016 kubelet[3241]: E0625 18:44:49.806716 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.807016 kubelet[3241]: W0625 18:44:49.806727 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.807016 kubelet[3241]: E0625 18:44:49.806851 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.807016 kubelet[3241]: E0625 18:44:49.806974 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.807016 kubelet[3241]: W0625 18:44:49.806982 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.807016 kubelet[3241]: E0625 18:44:49.806996 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.824115 kubelet[3241]: E0625 18:44:49.824098 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:49.824244 kubelet[3241]: W0625 18:44:49.824233 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:49.824469 kubelet[3241]: E0625 18:44:49.824451 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:49.831055 containerd[1716]: time="2024-06-25T18:44:49.830925454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:49.831283 containerd[1716]: time="2024-06-25T18:44:49.831211455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:49.832363 containerd[1716]: time="2024-06-25T18:44:49.832087355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:49.832444 containerd[1716]: time="2024-06-25T18:44:49.832367856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:49.854418 containerd[1716]: time="2024-06-25T18:44:49.854383275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9c454cc78-66ls8,Uid:6444057f-1b7c-4c51-a9c6-f097e94e922f,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d7893977fd526585d69eead11179a7fcb15bac96103ba48886a45a351e7bd2f\"" Jun 25 18:44:49.858806 containerd[1716]: time="2024-06-25T18:44:49.857587978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 18:44:49.877730 systemd[1]: Started cri-containerd-4fcc843d27c3f140c453f43a49070058e220589e083bdbe6d79c730abab79984.scope - libcontainer container 4fcc843d27c3f140c453f43a49070058e220589e083bdbe6d79c730abab79984. Jun 25 18:44:49.918656 containerd[1716]: time="2024-06-25T18:44:49.918225530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2k7nb,Uid:81e73e0b-5f53-4b67-82ce-77339b72a50c,Namespace:calico-system,Attempt:0,} returns sandbox id \"4fcc843d27c3f140c453f43a49070058e220589e083bdbe6d79c730abab79984\"" Jun 25 18:44:51.069303 kubelet[3241]: E0625 18:44:51.067293 3241 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42k4m" podUID="459a93ef-d13a-41c8-9d3f-50f9914d6a1a" Jun 25 18:44:53.066412 kubelet[3241]: E0625 18:44:53.066372 3241 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42k4m" podUID="459a93ef-d13a-41c8-9d3f-50f9914d6a1a" Jun 25 18:44:53.277938 containerd[1716]: time="2024-06-25T18:44:53.277891055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:53.281347 containerd[1716]: time="2024-06-25T18:44:53.281153158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 18:44:53.287246 containerd[1716]: time="2024-06-25T18:44:53.287103663Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:53.293648 containerd[1716]: time="2024-06-25T18:44:53.293612769Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:53.294681 containerd[1716]: time="2024-06-25T18:44:53.294641370Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.435630691s" Jun 25 18:44:53.294773 containerd[1716]: time="2024-06-25T18:44:53.294686570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 18:44:53.297264 containerd[1716]: 
time="2024-06-25T18:44:53.297237772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 18:44:53.329148 containerd[1716]: time="2024-06-25T18:44:53.328888700Z" level=info msg="CreateContainer within sandbox \"7d7893977fd526585d69eead11179a7fcb15bac96103ba48886a45a351e7bd2f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 18:44:53.382048 containerd[1716]: time="2024-06-25T18:44:53.381608646Z" level=info msg="CreateContainer within sandbox \"7d7893977fd526585d69eead11179a7fcb15bac96103ba48886a45a351e7bd2f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4475c2c69fafa6c96b94b0c143c1025035100e58bf28a52913ab2d09f8b198d7\"" Jun 25 18:44:53.385747 containerd[1716]: time="2024-06-25T18:44:53.383810047Z" level=info msg="StartContainer for \"4475c2c69fafa6c96b94b0c143c1025035100e58bf28a52913ab2d09f8b198d7\"" Jun 25 18:44:53.426443 systemd[1]: Started cri-containerd-4475c2c69fafa6c96b94b0c143c1025035100e58bf28a52913ab2d09f8b198d7.scope - libcontainer container 4475c2c69fafa6c96b94b0c143c1025035100e58bf28a52913ab2d09f8b198d7. Jun 25 18:44:53.475738 containerd[1716]: time="2024-06-25T18:44:53.475683827Z" level=info msg="StartContainer for \"4475c2c69fafa6c96b94b0c143c1025035100e58bf28a52913ab2d09f8b198d7\" returns successfully" Jun 25 18:44:54.175280 kubelet[3241]: I0625 18:44:54.175232 3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-9c454cc78-66ls8" podStartSLOduration=1.734846041 podStartE2EDuration="5.175183836s" podCreationTimestamp="2024-06-25 18:44:49 +0000 UTC" firstStartedPulling="2024-06-25 18:44:49.856082476 +0000 UTC m=+19.883290883" lastFinishedPulling="2024-06-25 18:44:53.296420371 +0000 UTC m=+23.323628678" observedRunningTime="2024-06-25 18:44:54.174866036 +0000 UTC m=+24.202074343" watchObservedRunningTime="2024-06-25 18:44:54.175183836 +0000 UTC m=+24.202392243" Jun 25 18:44:54.207940 kubelet[3241]: E0625 18:44:54.207913 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.207940 kubelet[3241]: W0625 18:44:54.207932 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.208169 kubelet[3241]: E0625 18:44:54.207953 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.208250 kubelet[3241]: E0625 18:44:54.208199 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.208250 kubelet[3241]: W0625 18:44:54.208214 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.208250 kubelet[3241]: E0625 18:44:54.208231 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:54.208480 kubelet[3241]: E0625 18:44:54.208454 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.208480 kubelet[3241]: W0625 18:44:54.208476 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.208584 kubelet[3241]: E0625 18:44:54.208493 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.208720 kubelet[3241]: E0625 18:44:54.208704 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.208720 kubelet[3241]: W0625 18:44:54.208717 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.208852 kubelet[3241]: E0625 18:44:54.208733 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.208961 kubelet[3241]: E0625 18:44:54.208931 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.208961 kubelet[3241]: W0625 18:44:54.208941 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.208961 kubelet[3241]: E0625 18:44:54.208956 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.209158 kubelet[3241]: E0625 18:44:54.209130 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.209158 kubelet[3241]: W0625 18:44:54.209140 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.209158 kubelet[3241]: E0625 18:44:54.209155 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.209355 kubelet[3241]: E0625 18:44:54.209342 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.209436 kubelet[3241]: W0625 18:44:54.209355 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.209436 kubelet[3241]: E0625 18:44:54.209370 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:54.209588 kubelet[3241]: E0625 18:44:54.209548 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.209588 kubelet[3241]: W0625 18:44:54.209558 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.209588 kubelet[3241]: E0625 18:44:54.209574 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.209786 kubelet[3241]: E0625 18:44:54.209779 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.209856 kubelet[3241]: W0625 18:44:54.209790 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.209856 kubelet[3241]: E0625 18:44:54.209805 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.210009 kubelet[3241]: E0625 18:44:54.209977 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.210009 kubelet[3241]: W0625 18:44:54.209987 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.210009 kubelet[3241]: E0625 18:44:54.210001 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.210190 kubelet[3241]: E0625 18:44:54.210169 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.210190 kubelet[3241]: W0625 18:44:54.210179 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.210312 kubelet[3241]: E0625 18:44:54.210193 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.210396 kubelet[3241]: E0625 18:44:54.210386 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.210473 kubelet[3241]: W0625 18:44:54.210396 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.210473 kubelet[3241]: E0625 18:44:54.210412 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:54.210619 kubelet[3241]: E0625 18:44:54.210594 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.210619 kubelet[3241]: W0625 18:44:54.210604 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.210619 kubelet[3241]: E0625 18:44:54.210618 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.210881 kubelet[3241]: E0625 18:44:54.210828 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.210881 kubelet[3241]: W0625 18:44:54.210839 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.210881 kubelet[3241]: E0625 18:44:54.210854 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.211087 kubelet[3241]: E0625 18:44:54.211028 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.211087 kubelet[3241]: W0625 18:44:54.211038 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.211087 kubelet[3241]: E0625 18:44:54.211053 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.223336 kubelet[3241]: E0625 18:44:54.223314 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.223336 kubelet[3241]: W0625 18:44:54.223327 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.223510 kubelet[3241]: E0625 18:44:54.223343 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.223629 kubelet[3241]: E0625 18:44:54.223614 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.223629 kubelet[3241]: W0625 18:44:54.223626 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.223762 kubelet[3241]: E0625 18:44:54.223655 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:54.223926 kubelet[3241]: E0625 18:44:54.223910 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.223926 kubelet[3241]: W0625 18:44:54.223923 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.224060 kubelet[3241]: E0625 18:44:54.223946 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.224215 kubelet[3241]: E0625 18:44:54.224199 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.224215 kubelet[3241]: W0625 18:44:54.224212 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.224347 kubelet[3241]: E0625 18:44:54.224232 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.224463 kubelet[3241]: E0625 18:44:54.224450 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.224518 kubelet[3241]: W0625 18:44:54.224464 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.224518 kubelet[3241]: E0625 18:44:54.224493 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.224705 kubelet[3241]: E0625 18:44:54.224688 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.224705 kubelet[3241]: W0625 18:44:54.224701 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.224809 kubelet[3241]: E0625 18:44:54.224789 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.224950 kubelet[3241]: E0625 18:44:54.224935 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.224950 kubelet[3241]: W0625 18:44:54.224947 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.225058 kubelet[3241]: E0625 18:44:54.225035 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:54.225176 kubelet[3241]: E0625 18:44:54.225163 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.225176 kubelet[3241]: W0625 18:44:54.225174 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.225368 kubelet[3241]: E0625 18:44:54.225332 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.225451 kubelet[3241]: E0625 18:44:54.225435 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.225498 kubelet[3241]: W0625 18:44:54.225450 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.225498 kubelet[3241]: E0625 18:44:54.225470 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.225808 kubelet[3241]: E0625 18:44:54.225790 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.225808 kubelet[3241]: W0625 18:44:54.225805 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.225992 kubelet[3241]: E0625 18:44:54.225826 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.226049 kubelet[3241]: E0625 18:44:54.226016 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.226049 kubelet[3241]: W0625 18:44:54.226026 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.226049 kubelet[3241]: E0625 18:44:54.226046 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.226256 kubelet[3241]: E0625 18:44:54.226241 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.226256 kubelet[3241]: W0625 18:44:54.226254 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.226367 kubelet[3241]: E0625 18:44:54.226290 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:54.226625 kubelet[3241]: E0625 18:44:54.226609 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.226625 kubelet[3241]: W0625 18:44:54.226623 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.226744 kubelet[3241]: E0625 18:44:54.226643 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.226887 kubelet[3241]: E0625 18:44:54.226868 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.226887 kubelet[3241]: W0625 18:44:54.226881 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.227125 kubelet[3241]: E0625 18:44:54.226944 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.227125 kubelet[3241]: E0625 18:44:54.227085 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.227125 kubelet[3241]: W0625 18:44:54.227094 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.227336 kubelet[3241]: E0625 18:44:54.227310 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.227336 kubelet[3241]: W0625 18:44:54.227324 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.227484 kubelet[3241]: E0625 18:44:54.227338 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.227484 kubelet[3241]: E0625 18:44:54.227324 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:54.227586 kubelet[3241]: E0625 18:44:54.227555 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.227586 kubelet[3241]: W0625 18:44:54.227565 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.227586 kubelet[3241]: E0625 18:44:54.227581 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:44:54.228354 kubelet[3241]: E0625 18:44:54.228234 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:44:54.228354 kubelet[3241]: W0625 18:44:54.228249 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:44:54.228354 kubelet[3241]: E0625 18:44:54.228265 3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:44:55.041235 containerd[1716]: time="2024-06-25T18:44:55.040241936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:55.043383 containerd[1716]: time="2024-06-25T18:44:55.043307439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 18:44:55.048710 containerd[1716]: time="2024-06-25T18:44:55.047252244Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:55.054293 containerd[1716]: time="2024-06-25T18:44:55.052515549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:55.055395 containerd[1716]: time="2024-06-25T18:44:55.055043252Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.75759818s" Jun 25 18:44:55.055522 containerd[1716]: time="2024-06-25T18:44:55.055501252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 18:44:55.058923 containerd[1716]: time="2024-06-25T18:44:55.058713355Z" level=info msg="CreateContainer within sandbox \"4fcc843d27c3f140c453f43a49070058e220589e083bdbe6d79c730abab79984\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 18:44:55.067467 kubelet[3241]: E0625 18:44:55.067436 3241 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42k4m" podUID="459a93ef-d13a-41c8-9d3f-50f9914d6a1a" Jun 25 18:44:55.097048 containerd[1716]: time="2024-06-25T18:44:55.096945095Z" level=info msg="CreateContainer within sandbox \"4fcc843d27c3f140c453f43a49070058e220589e083bdbe6d79c730abab79984\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"df6ecbb60aa88a59ade1c1dd444d76773dd22c6c9d025f85e806269f7e7060b7\"" Jun 25 18:44:55.098754 containerd[1716]: time="2024-06-25T18:44:55.097683796Z" level=info msg="StartContainer for 
\"df6ecbb60aa88a59ade1c1dd444d76773dd22c6c9d025f85e806269f7e7060b7\"" Jun 25 18:44:55.132037 systemd[1]: run-containerd-runc-k8s.io-df6ecbb60aa88a59ade1c1dd444d76773dd22c6c9d025f85e806269f7e7060b7-runc.fHKn9I.mount: Deactivated successfully. Jun 25 18:44:55.140435 systemd[1]: Started cri-containerd-df6ecbb60aa88a59ade1c1dd444d76773dd22c6c9d025f85e806269f7e7060b7.scope - libcontainer container df6ecbb60aa88a59ade1c1dd444d76773dd22c6c9d025f85e806269f7e7060b7. Jun 25 18:44:55.169307 containerd[1716]: time="2024-06-25T18:44:55.169244570Z" level=info msg="StartContainer for \"df6ecbb60aa88a59ade1c1dd444d76773dd22c6c9d025f85e806269f7e7060b7\" returns successfully" Jun 25 18:44:55.173417 kubelet[3241]: I0625 18:44:55.173345 3241 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:44:55.181474 systemd[1]: cri-containerd-df6ecbb60aa88a59ade1c1dd444d76773dd22c6c9d025f85e806269f7e7060b7.scope: Deactivated successfully. Jun 25 18:44:56.085795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df6ecbb60aa88a59ade1c1dd444d76773dd22c6c9d025f85e806269f7e7060b7-rootfs.mount: Deactivated successfully. Jun 25 18:44:57.067226 kubelet[3241]: E0625 18:44:57.067172 3241 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42k4m" podUID="459a93ef-d13a-41c8-9d3f-50f9914d6a1a" Jun 25 18:44:58.360367 containerd[1716]: time="2024-06-25T18:44:58.360251791Z" level=info msg="shim disconnected" id=df6ecbb60aa88a59ade1c1dd444d76773dd22c6c9d025f85e806269f7e7060b7 namespace=k8s.io Jun 25 18:44:58.361050 containerd[1716]: time="2024-06-25T18:44:58.360359491Z" level=warning msg="cleaning up after shim disconnected" id=df6ecbb60aa88a59ade1c1dd444d76773dd22c6c9d025f85e806269f7e7060b7 namespace=k8s.io Jun 25 18:44:58.361050 containerd[1716]: time="2024-06-25T18:44:58.360399091Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:44:59.067717 kubelet[3241]: E0625 18:44:59.066442 3241 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42k4m" podUID="459a93ef-d13a-41c8-9d3f-50f9914d6a1a" Jun 25 18:44:59.183875 containerd[1716]: time="2024-06-25T18:44:59.183829648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 18:45:01.067896 kubelet[3241]: E0625 18:45:01.067459 3241 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42k4m" podUID="459a93ef-d13a-41c8-9d3f-50f9914d6a1a" Jun 25 18:45:03.066560 kubelet[3241]: E0625 18:45:03.066522 3241 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42k4m" podUID="459a93ef-d13a-41c8-9d3f-50f9914d6a1a" Jun 25 18:45:05.066560 kubelet[3241]: E0625 18:45:05.066503 3241 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42k4m" podUID="459a93ef-d13a-41c8-9d3f-50f9914d6a1a" Jun 25 18:45:05.974051 containerd[1716]: time="2024-06-25T18:45:05.973998772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:05.975899 containerd[1716]: time="2024-06-25T18:45:05.975835474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 18:45:05.979313 containerd[1716]: time="2024-06-25T18:45:05.979193478Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:05.983858 containerd[1716]: time="2024-06-25T18:45:05.983807082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:05.984655 containerd[1716]: time="2024-06-25T18:45:05.984509583Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 6.800630235s" Jun 25 18:45:05.984655 containerd[1716]: time="2024-06-25T18:45:05.984548283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 18:45:05.987005 containerd[1716]: time="2024-06-25T18:45:05.986719485Z" level=info msg="CreateContainer within sandbox \"4fcc843d27c3f140c453f43a49070058e220589e083bdbe6d79c730abab79984\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 18:45:06.023631 containerd[1716]: time="2024-06-25T18:45:06.023592023Z" level=info msg="CreateContainer within sandbox \"4fcc843d27c3f140c453f43a49070058e220589e083bdbe6d79c730abab79984\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9c57025023c317a1ddc0dedea4e0e053e54bc1570236bed849b30a6c07982eec\"" Jun 25 18:45:06.024299 containerd[1716]: time="2024-06-25T18:45:06.024059324Z" level=info msg="StartContainer for \"9c57025023c317a1ddc0dedea4e0e053e54bc1570236bed849b30a6c07982eec\"" Jun 25 18:45:06.057423 systemd[1]: Started cri-containerd-9c57025023c317a1ddc0dedea4e0e053e54bc1570236bed849b30a6c07982eec.scope - libcontainer container 9c57025023c317a1ddc0dedea4e0e053e54bc1570236bed849b30a6c07982eec. 
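The burst of driver-call.go and plugins.go errors earlier in this log comes from the kubelet's FlexVolume plugin probe: it finds the nodeagent~uds plugin directory, but the uds executable has not been installed yet (that is what Calico's flexvol-driver init container, using the pod2daemon-flexvol image pulled above, puts in place), so the driver call produces no output and unmarshalling the empty string fails with "unexpected end of JSON input". For orientation only, the sketch below shows the kind of JSON status document a FlexVolume driver is expected to print for the init call; it is an illustrative stand-in, not Calico's actual uds binary.

    // Illustrative FlexVolume "init" response; a stand-in, not Calico's uds driver.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the JSON the kubelet tries to unmarshal in driver-call.go:
    // an empty stdout is exactly what yields "unexpected end of JSON input".
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
            return
        }
        // Other FlexVolume calls (mount, unmount, ...) are out of scope for this sketch.
        fmt.Println(`{"status": "Not supported"}`)
    }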
Jun 25 18:45:06.089965 containerd[1716]: time="2024-06-25T18:45:06.089851192Z" level=info msg="StartContainer for \"9c57025023c317a1ddc0dedea4e0e053e54bc1570236bed849b30a6c07982eec\" returns successfully" Jun 25 18:45:07.066746 kubelet[3241]: E0625 18:45:07.066699 3241 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42k4m" podUID="459a93ef-d13a-41c8-9d3f-50f9914d6a1a" Jun 25 18:45:07.655904 kubelet[3241]: I0625 18:45:07.655502 3241 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:45:09.066925 kubelet[3241]: E0625 18:45:09.066681 3241 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42k4m" podUID="459a93ef-d13a-41c8-9d3f-50f9914d6a1a" Jun 25 18:45:09.333501 containerd[1716]: time="2024-06-25T18:45:09.333363632Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:45:09.335686 systemd[1]: cri-containerd-9c57025023c317a1ddc0dedea4e0e053e54bc1570236bed849b30a6c07982eec.scope: Deactivated successfully. Jun 25 18:45:09.356765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c57025023c317a1ddc0dedea4e0e053e54bc1570236bed849b30a6c07982eec-rootfs.mount: Deactivated successfully. 
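The error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized" entry above is containerd reacting to a file-change event: install-cni has just written /etc/cni/net.d/calico-kubeconfig, but at that moment the directory still contains no network configuration file, so the reload finds nothing to load and the "network is not ready" errors for csi-node-driver-42k4m keep repeating until a config list lands there. A minimal sketch of that check, assuming the CRI plugin looks for *.conf, *.conflist and *.json files in the directory (illustrative, not containerd's code):

    // Illustrative re-creation of the "no network config found" check; not containerd's code.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/cni/net.d"
        // Assumed extensions: *.conf, *.conflist and *.json count as network configs;
        // a kubeconfig written into the same directory does not.
        var configs []string
        for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, err := filepath.Glob(filepath.Join(dir, pattern))
            if err == nil {
                configs = append(configs, matches...)
            }
        }
        if len(configs) == 0 {
            fmt.Fprintf(os.Stderr, "cni config load failed: no network config found in %s\n", dir)
            os.Exit(1)
        }
        fmt.Println("CNI network configs:", configs)
    }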
Jun 25 18:45:09.381095 kubelet[3241]: I0625 18:45:09.380250 3241 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 18:45:10.563358 kubelet[3241]: I0625 18:45:09.406829 3241 topology_manager.go:215] "Topology Admit Handler" podUID="7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a" podNamespace="kube-system" podName="coredns-76f75df574-pmk7s" Jun 25 18:45:10.563358 kubelet[3241]: I0625 18:45:09.423172 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z885z\" (UniqueName: \"kubernetes.io/projected/7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a-kube-api-access-z885z\") pod \"coredns-76f75df574-pmk7s\" (UID: \"7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a\") " pod="kube-system/coredns-76f75df574-pmk7s" Jun 25 18:45:10.563358 kubelet[3241]: I0625 18:45:09.423218 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a-config-volume\") pod \"coredns-76f75df574-pmk7s\" (UID: \"7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a\") " pod="kube-system/coredns-76f75df574-pmk7s" Jun 25 18:45:10.563358 kubelet[3241]: I0625 18:45:09.484543 3241 topology_manager.go:215] "Topology Admit Handler" podUID="6b78a687-8f7f-4336-ad30-99e97b06a846" podNamespace="kube-system" podName="coredns-76f75df574-f5rm6" Jun 25 18:45:10.563358 kubelet[3241]: I0625 18:45:09.484736 3241 topology_manager.go:215] "Topology Admit Handler" podUID="ac43ad38-8dbb-4f6c-a78e-0a97e02beaac" podNamespace="calico-system" podName="calico-kube-controllers-5d848d5b8b-xwq6w" Jun 25 18:45:10.563358 kubelet[3241]: I0625 18:45:09.524001 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b78a687-8f7f-4336-ad30-99e97b06a846-config-volume\") pod \"coredns-76f75df574-f5rm6\" (UID: \"6b78a687-8f7f-4336-ad30-99e97b06a846\") " pod="kube-system/coredns-76f75df574-f5rm6" Jun 25 18:45:09.415288 systemd[1]: Created slice kubepods-burstable-pod7f1af524_d05f_4a7f_a4e5_c3f8caa1da7a.slice - libcontainer container kubepods-burstable-pod7f1af524_d05f_4a7f_a4e5_c3f8caa1da7a.slice. 
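A small aside, grounded in the entries above: the pod admitted with podUID="7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a" lands in kubepods-burstable-pod7f1af524_d05f_4a7f_a4e5_c3f8caa1da7a.slice, i.e. the kubelet's systemd cgroup driver replaces dashes in the UID with underscores and folds the QoS class into the unit name. A tiny illustration of that mapping (a sketch, not kubelet code):

    // Sketch of the kubelet's systemd slice naming seen in this log; not kubelet code.
    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName builds "kubepods-<qos>-pod<uid>.slice" with the UID's dashes
    // replaced by underscores; guaranteed-QoS pods drop the QoS segment entirely.
    func podSliceName(qos, uid string) string {
        escaped := strings.ReplaceAll(uid, "-", "_")
        if qos == "guaranteed" {
            return fmt.Sprintf("kubepods-pod%s.slice", escaped)
        }
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
    }

    func main() {
        // Matches the units systemd creates for the coredns and calico-kube-controllers pods.
        fmt.Println(podSliceName("burstable", "7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a"))
        fmt.Println(podSliceName("besteffort", "ac43ad38-8dbb-4f6c-a78e-0a97e02beaac"))
    }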
Jun 25 18:45:10.564258 kubelet[3241]: I0625 18:45:09.524047 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgn4q\" (UniqueName: \"kubernetes.io/projected/ac43ad38-8dbb-4f6c-a78e-0a97e02beaac-kube-api-access-dgn4q\") pod \"calico-kube-controllers-5d848d5b8b-xwq6w\" (UID: \"ac43ad38-8dbb-4f6c-a78e-0a97e02beaac\") " pod="calico-system/calico-kube-controllers-5d848d5b8b-xwq6w" Jun 25 18:45:10.564258 kubelet[3241]: I0625 18:45:09.524093 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac43ad38-8dbb-4f6c-a78e-0a97e02beaac-tigera-ca-bundle\") pod \"calico-kube-controllers-5d848d5b8b-xwq6w\" (UID: \"ac43ad38-8dbb-4f6c-a78e-0a97e02beaac\") " pod="calico-system/calico-kube-controllers-5d848d5b8b-xwq6w" Jun 25 18:45:10.564258 kubelet[3241]: I0625 18:45:09.524124 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dtkr\" (UniqueName: \"kubernetes.io/projected/6b78a687-8f7f-4336-ad30-99e97b06a846-kube-api-access-5dtkr\") pod \"coredns-76f75df574-f5rm6\" (UID: \"6b78a687-8f7f-4336-ad30-99e97b06a846\") " pod="kube-system/coredns-76f75df574-f5rm6" Jun 25 18:45:09.495501 systemd[1]: Created slice kubepods-besteffort-podac43ad38_8dbb_4f6c_a78e_0a97e02beaac.slice - libcontainer container kubepods-besteffort-podac43ad38_8dbb_4f6c_a78e_0a97e02beaac.slice. Jun 25 18:45:09.501825 systemd[1]: Created slice kubepods-burstable-pod6b78a687_8f7f_4336_ad30_99e97b06a846.slice - libcontainer container kubepods-burstable-pod6b78a687_8f7f_4336_ad30_99e97b06a846.slice. Jun 25 18:45:10.865092 containerd[1716]: time="2024-06-25T18:45:10.864822845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pmk7s,Uid:7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a,Namespace:kube-system,Attempt:0,}" Jun 25 18:45:10.868137 containerd[1716]: time="2024-06-25T18:45:10.868066848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d848d5b8b-xwq6w,Uid:ac43ad38-8dbb-4f6c-a78e-0a97e02beaac,Namespace:calico-system,Attempt:0,}" Jun 25 18:45:10.873126 containerd[1716]: time="2024-06-25T18:45:10.873089753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f5rm6,Uid:6b78a687-8f7f-4336-ad30-99e97b06a846,Namespace:kube-system,Attempt:0,}" Jun 25 18:45:11.072057 systemd[1]: Created slice kubepods-besteffort-pod459a93ef_d13a_41c8_9d3f_50f9914d6a1a.slice - libcontainer container kubepods-besteffort-pod459a93ef_d13a_41c8_9d3f_50f9914d6a1a.slice. 
Jun 25 18:45:11.074508 containerd[1716]: time="2024-06-25T18:45:11.074457741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-42k4m,Uid:459a93ef-d13a-41c8-9d3f-50f9914d6a1a,Namespace:calico-system,Attempt:0,}" Jun 25 18:45:13.462529 containerd[1716]: time="2024-06-25T18:45:13.462449174Z" level=info msg="shim disconnected" id=9c57025023c317a1ddc0dedea4e0e053e54bc1570236bed849b30a6c07982eec namespace=k8s.io Jun 25 18:45:13.462529 containerd[1716]: time="2024-06-25T18:45:13.462520374Z" level=warning msg="cleaning up after shim disconnected" id=9c57025023c317a1ddc0dedea4e0e053e54bc1570236bed849b30a6c07982eec namespace=k8s.io Jun 25 18:45:13.462529 containerd[1716]: time="2024-06-25T18:45:13.462534074Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:45:13.724677 containerd[1716]: time="2024-06-25T18:45:13.724428919Z" level=error msg="Failed to destroy network for sandbox \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:13.725322 containerd[1716]: time="2024-06-25T18:45:13.725120319Z" level=error msg="encountered an error cleaning up failed sandbox \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:13.725322 containerd[1716]: time="2024-06-25T18:45:13.725199219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-42k4m,Uid:459a93ef-d13a-41c8-9d3f-50f9914d6a1a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:13.725637 kubelet[3241]: E0625 18:45:13.725607 3241 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:13.726816 kubelet[3241]: E0625 18:45:13.726659 3241 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-42k4m" Jun 25 18:45:13.726816 kubelet[3241]: E0625 18:45:13.726705 3241 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-42k4m" Jun 25 18:45:13.726816 kubelet[3241]: E0625 18:45:13.726785 3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-42k4m_calico-system(459a93ef-d13a-41c8-9d3f-50f9914d6a1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-42k4m_calico-system(459a93ef-d13a-41c8-9d3f-50f9914d6a1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-42k4m" podUID="459a93ef-d13a-41c8-9d3f-50f9914d6a1a" Jun 25 18:45:13.727366 containerd[1716]: time="2024-06-25T18:45:13.727216221Z" level=error msg="Failed to destroy network for sandbox \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:13.728719 containerd[1716]: time="2024-06-25T18:45:13.728578422Z" level=error msg="encountered an error cleaning up failed sandbox \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:13.728719 containerd[1716]: time="2024-06-25T18:45:13.728642523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pmk7s,Uid:7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:13.729424 kubelet[3241]: E0625 18:45:13.729391 3241 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:13.729786 kubelet[3241]: E0625 18:45:13.729538 3241 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-pmk7s" Jun 25 18:45:13.729786 kubelet[3241]: E0625 18:45:13.729574 3241 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-pmk7s" Jun 25 18:45:13.729978 kubelet[3241]: E0625 18:45:13.729941 3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-pmk7s_kube-system(7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-pmk7s_kube-system(7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-pmk7s" podUID="7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a" Jun 25 18:45:13.731390 containerd[1716]: time="2024-06-25T18:45:13.731338725Z" level=error msg="Failed to destroy network for sandbox \"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:13.731770 containerd[1716]: time="2024-06-25T18:45:13.731737425Z" level=error msg="encountered an error cleaning up failed sandbox \"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:13.731897 containerd[1716]: time="2024-06-25T18:45:13.731872226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d848d5b8b-xwq6w,Uid:ac43ad38-8dbb-4f6c-a78e-0a97e02beaac,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:13.732353 kubelet[3241]: E0625 18:45:13.732320 3241 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:13.732732 kubelet[3241]: E0625 18:45:13.732366 3241 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d848d5b8b-xwq6w" Jun 25 18:45:13.732732 kubelet[3241]: E0625 18:45:13.732392 3241 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d848d5b8b-xwq6w" Jun 25 18:45:13.732732 kubelet[3241]: E0625 18:45:13.732446 3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d848d5b8b-xwq6w_calico-system(ac43ad38-8dbb-4f6c-a78e-0a97e02beaac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d848d5b8b-xwq6w_calico-system(ac43ad38-8dbb-4f6c-a78e-0a97e02beaac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d848d5b8b-xwq6w" podUID="ac43ad38-8dbb-4f6c-a78e-0a97e02beaac" Jun 25 18:45:13.732924 containerd[1716]: time="2024-06-25T18:45:13.732816226Z" level=error msg="Failed to destroy network for sandbox \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:13.733375 containerd[1716]: time="2024-06-25T18:45:13.733253727Z" level=error msg="encountered an error cleaning up failed sandbox \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:13.733513 containerd[1716]: time="2024-06-25T18:45:13.733368927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f5rm6,Uid:6b78a687-8f7f-4336-ad30-99e97b06a846,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:13.733579 kubelet[3241]: E0625 18:45:13.733552 3241 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:13.733630 kubelet[3241]: E0625 18:45:13.733602 3241 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-f5rm6" Jun 25 18:45:13.733630 kubelet[3241]: E0625 18:45:13.733627 3241 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-f5rm6" Jun 25 18:45:13.733712 kubelet[3241]: E0625 18:45:13.733676 3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-f5rm6_kube-system(6b78a687-8f7f-4336-ad30-99e97b06a846)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-f5rm6_kube-system(6b78a687-8f7f-4336-ad30-99e97b06a846)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-f5rm6" podUID="6b78a687-8f7f-4336-ad30-99e97b06a846" Jun 25 18:45:14.218210 containerd[1716]: time="2024-06-25T18:45:14.218156180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 18:45:14.223141 kubelet[3241]: I0625 18:45:14.223115 3241 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Jun 25 18:45:14.226500 containerd[1716]: time="2024-06-25T18:45:14.226313088Z" level=info msg="StopPodSandbox for \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\"" Jun 25 18:45:14.227621 containerd[1716]: time="2024-06-25T18:45:14.227483989Z" level=info msg="Ensure that sandbox 26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b in task-service has been cleanup successfully" Jun 25 18:45:14.230468 kubelet[3241]: I0625 18:45:14.230070 3241 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Jun 25 18:45:14.230768 containerd[1716]: time="2024-06-25T18:45:14.230735692Z" level=info msg="StopPodSandbox for \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\"" Jun 25 18:45:14.231554 containerd[1716]: time="2024-06-25T18:45:14.231526793Z" level=info msg="Ensure that sandbox 4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20 in task-service has been cleanup successfully" Jun 25 18:45:14.240379 kubelet[3241]: I0625 18:45:14.238591 3241 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Jun 25 18:45:14.240676 containerd[1716]: time="2024-06-25T18:45:14.240647301Z" level=info msg="StopPodSandbox for \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\"" Jun 25 18:45:14.240966 containerd[1716]: time="2024-06-25T18:45:14.240940001Z" level=info msg="Ensure that sandbox 6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112 in task-service has been cleanup successfully" Jun 25 18:45:14.244602 kubelet[3241]: I0625 18:45:14.244581 3241 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Jun 25 18:45:14.248011 containerd[1716]: time="2024-06-25T18:45:14.245425306Z" level=info msg="StopPodSandbox for 
\"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\"" Jun 25 18:45:14.248175 containerd[1716]: time="2024-06-25T18:45:14.248146708Z" level=info msg="Ensure that sandbox bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008 in task-service has been cleanup successfully" Jun 25 18:45:14.318277 containerd[1716]: time="2024-06-25T18:45:14.317359673Z" level=error msg="StopPodSandbox for \"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\" failed" error="failed to destroy network for sandbox \"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:14.318443 containerd[1716]: time="2024-06-25T18:45:14.318303874Z" level=error msg="StopPodSandbox for \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\" failed" error="failed to destroy network for sandbox \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:14.319113 kubelet[3241]: E0625 18:45:14.318724 3241 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Jun 25 18:45:14.319113 kubelet[3241]: E0625 18:45:14.318819 3241 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b"} Jun 25 18:45:14.319113 kubelet[3241]: E0625 18:45:14.318874 3241 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:45:14.319113 kubelet[3241]: E0625 18:45:14.318918 3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-pmk7s" podUID="7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a" Jun 25 18:45:14.319493 kubelet[3241]: E0625 18:45:14.318967 3241 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Jun 25 18:45:14.319493 kubelet[3241]: E0625 18:45:14.318986 3241 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008"} Jun 25 18:45:14.319493 kubelet[3241]: E0625 18:45:14.319030 3241 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ac43ad38-8dbb-4f6c-a78e-0a97e02beaac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:45:14.319493 kubelet[3241]: E0625 18:45:14.319064 3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ac43ad38-8dbb-4f6c-a78e-0a97e02beaac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d848d5b8b-xwq6w" podUID="ac43ad38-8dbb-4f6c-a78e-0a97e02beaac" Jun 25 18:45:14.323172 containerd[1716]: time="2024-06-25T18:45:14.323109978Z" level=error msg="StopPodSandbox for \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\" failed" error="failed to destroy network for sandbox \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:14.323642 kubelet[3241]: E0625 18:45:14.323493 3241 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Jun 25 18:45:14.323642 kubelet[3241]: E0625 18:45:14.323536 3241 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112"} Jun 25 18:45:14.323642 kubelet[3241]: E0625 18:45:14.323578 3241 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6b78a687-8f7f-4336-ad30-99e97b06a846\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:45:14.323642 kubelet[3241]: E0625 18:45:14.323620 
3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6b78a687-8f7f-4336-ad30-99e97b06a846\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-f5rm6" podUID="6b78a687-8f7f-4336-ad30-99e97b06a846" Jun 25 18:45:14.326457 containerd[1716]: time="2024-06-25T18:45:14.326418181Z" level=error msg="StopPodSandbox for \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\" failed" error="failed to destroy network for sandbox \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:14.326637 kubelet[3241]: E0625 18:45:14.326596 3241 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Jun 25 18:45:14.326716 kubelet[3241]: E0625 18:45:14.326653 3241 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20"} Jun 25 18:45:14.326769 kubelet[3241]: E0625 18:45:14.326720 3241 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"459a93ef-d13a-41c8-9d3f-50f9914d6a1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:45:14.326769 kubelet[3241]: E0625 18:45:14.326758 3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"459a93ef-d13a-41c8-9d3f-50f9914d6a1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-42k4m" podUID="459a93ef-d13a-41c8-9d3f-50f9914d6a1a" Jun 25 18:45:14.547229 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20-shm.mount: Deactivated successfully. Jun 25 18:45:14.547356 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112-shm.mount: Deactivated successfully. 
Jun 25 18:45:14.547438 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b-shm.mount: Deactivated successfully. Jun 25 18:45:23.803548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2552894541.mount: Deactivated successfully. Jun 25 18:45:23.868117 containerd[1716]: time="2024-06-25T18:45:23.868055849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:23.873509 containerd[1716]: time="2024-06-25T18:45:23.873449354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 18:45:23.877162 containerd[1716]: time="2024-06-25T18:45:23.877102857Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:23.882486 containerd[1716]: time="2024-06-25T18:45:23.882451862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:23.883249 containerd[1716]: time="2024-06-25T18:45:23.883209662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 9.665002282s" Jun 25 18:45:23.883360 containerd[1716]: time="2024-06-25T18:45:23.883254762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 18:45:23.895743 containerd[1716]: time="2024-06-25T18:45:23.895717274Z" level=info msg="CreateContainer within sandbox \"4fcc843d27c3f140c453f43a49070058e220589e083bdbe6d79c730abab79984\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 18:45:23.941741 containerd[1716]: time="2024-06-25T18:45:23.941699514Z" level=info msg="CreateContainer within sandbox \"4fcc843d27c3f140c453f43a49070058e220589e083bdbe6d79c730abab79984\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a60dacbcb85b605e440e11f5c3a0be4245363cdb71ccaa9fa1b6f61ea5e06f85\"" Jun 25 18:45:23.942418 containerd[1716]: time="2024-06-25T18:45:23.942369115Z" level=info msg="StartContainer for \"a60dacbcb85b605e440e11f5c3a0be4245363cdb71ccaa9fa1b6f61ea5e06f85\"" Jun 25 18:45:23.968459 systemd[1]: Started cri-containerd-a60dacbcb85b605e440e11f5c3a0be4245363cdb71ccaa9fa1b6f61ea5e06f85.scope - libcontainer container a60dacbcb85b605e440e11f5c3a0be4245363cdb71ccaa9fa1b6f61ea5e06f85. 
Jun 25 18:45:24.017519 containerd[1716]: time="2024-06-25T18:45:24.017449982Z" level=info msg="StartContainer for \"a60dacbcb85b605e440e11f5c3a0be4245363cdb71ccaa9fa1b6f61ea5e06f85\" returns successfully" Jun 25 18:45:24.307896 kubelet[3241]: I0625 18:45:24.307611 3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-2k7nb" podStartSLOduration=1.343918809 podStartE2EDuration="35.30756104s" podCreationTimestamp="2024-06-25 18:44:49 +0000 UTC" firstStartedPulling="2024-06-25 18:44:49.919913132 +0000 UTC m=+19.947121539" lastFinishedPulling="2024-06-25 18:45:23.883555463 +0000 UTC m=+53.910763770" observedRunningTime="2024-06-25 18:45:24.306466639 +0000 UTC m=+54.333674946" watchObservedRunningTime="2024-06-25 18:45:24.30756104 +0000 UTC m=+54.334769447" Jun 25 18:45:24.480821 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 18:45:24.480957 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 25 18:45:25.068338 containerd[1716]: time="2024-06-25T18:45:25.068294517Z" level=info msg="StopPodSandbox for \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\"" Jun 25 18:45:25.241297 containerd[1716]: 2024-06-25 18:45:25.191 [INFO][4273] k8s.go 608: Cleaning up netns ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Jun 25 18:45:25.241297 containerd[1716]: 2024-06-25 18:45:25.191 [INFO][4273] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" iface="eth0" netns="/var/run/netns/cni-d1f05077-77f9-e815-165c-f64bd3ae4537" Jun 25 18:45:25.241297 containerd[1716]: 2024-06-25 18:45:25.192 [INFO][4273] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" iface="eth0" netns="/var/run/netns/cni-d1f05077-77f9-e815-165c-f64bd3ae4537" Jun 25 18:45:25.241297 containerd[1716]: 2024-06-25 18:45:25.192 [INFO][4273] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" iface="eth0" netns="/var/run/netns/cni-d1f05077-77f9-e815-165c-f64bd3ae4537" Jun 25 18:45:25.241297 containerd[1716]: 2024-06-25 18:45:25.192 [INFO][4273] k8s.go 615: Releasing IP address(es) ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Jun 25 18:45:25.241297 containerd[1716]: 2024-06-25 18:45:25.192 [INFO][4273] utils.go 188: Calico CNI releasing IP address ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Jun 25 18:45:25.241297 containerd[1716]: 2024-06-25 18:45:25.222 [INFO][4279] ipam_plugin.go 411: Releasing address using handleID ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" HandleID="k8s-pod-network.4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Workload="ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0" Jun 25 18:45:25.241297 containerd[1716]: 2024-06-25 18:45:25.222 [INFO][4279] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:25.241297 containerd[1716]: 2024-06-25 18:45:25.223 [INFO][4279] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:25.241297 containerd[1716]: 2024-06-25 18:45:25.233 [WARNING][4279] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" HandleID="k8s-pod-network.4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Workload="ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0" Jun 25 18:45:25.241297 containerd[1716]: 2024-06-25 18:45:25.233 [INFO][4279] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" HandleID="k8s-pod-network.4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Workload="ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0" Jun 25 18:45:25.241297 containerd[1716]: 2024-06-25 18:45:25.235 [INFO][4279] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:25.241297 containerd[1716]: 2024-06-25 18:45:25.237 [INFO][4273] k8s.go 621: Teardown processing complete. ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Jun 25 18:45:25.242231 systemd[1]: run-netns-cni\x2dd1f05077\x2d77f9\x2de815\x2d165c\x2df64bd3ae4537.mount: Deactivated successfully. Jun 25 18:45:25.243675 containerd[1716]: time="2024-06-25T18:45:25.243495973Z" level=info msg="TearDown network for sandbox \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\" successfully" Jun 25 18:45:25.243675 containerd[1716]: time="2024-06-25T18:45:25.243534173Z" level=info msg="StopPodSandbox for \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\" returns successfully" Jun 25 18:45:25.244557 containerd[1716]: time="2024-06-25T18:45:25.244524874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-42k4m,Uid:459a93ef-d13a-41c8-9d3f-50f9914d6a1a,Namespace:calico-system,Attempt:1,}" Jun 25 18:45:25.466980 systemd-networkd[1359]: cali0c71902af10: Link UP Jun 25 18:45:25.468264 systemd-networkd[1359]: cali0c71902af10: Gained carrier Jun 25 18:45:25.487458 containerd[1716]: 2024-06-25 18:45:25.322 [INFO][4297] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 18:45:25.487458 containerd[1716]: 2024-06-25 18:45:25.332 [INFO][4297] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0 csi-node-driver- calico-system 459a93ef-d13a-41c8-9d3f-50f9914d6a1a 742 0 2024-06-25 18:44:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4012.0.0-a-d50f1c7422 csi-node-driver-42k4m eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali0c71902af10 [] []}} ContainerID="02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b" Namespace="calico-system" Pod="csi-node-driver-42k4m" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-" Jun 25 18:45:25.487458 containerd[1716]: 2024-06-25 18:45:25.332 [INFO][4297] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b" Namespace="calico-system" Pod="csi-node-driver-42k4m" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0" Jun 25 18:45:25.487458 containerd[1716]: 2024-06-25 18:45:25.365 [INFO][4307] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b" HandleID="k8s-pod-network.02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b" Workload="ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0" Jun 25 18:45:25.487458 containerd[1716]: 2024-06-25 18:45:25.373 [INFO][4307] ipam_plugin.go 264: Auto assigning IP ContainerID="02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b" HandleID="k8s-pod-network.02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b" Workload="ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318300), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012.0.0-a-d50f1c7422", "pod":"csi-node-driver-42k4m", "timestamp":"2024-06-25 18:45:25.365842082 +0000 UTC"}, Hostname:"ci-4012.0.0-a-d50f1c7422", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:25.487458 containerd[1716]: 2024-06-25 18:45:25.373 [INFO][4307] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:25.487458 containerd[1716]: 2024-06-25 18:45:25.373 [INFO][4307] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:25.487458 containerd[1716]: 2024-06-25 18:45:25.373 [INFO][4307] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-d50f1c7422' Jun 25 18:45:25.487458 containerd[1716]: 2024-06-25 18:45:25.375 [INFO][4307] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:25.487458 containerd[1716]: 2024-06-25 18:45:25.378 [INFO][4307] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:25.487458 containerd[1716]: 2024-06-25 18:45:25.382 [INFO][4307] ipam.go 489: Trying affinity for 192.168.40.0/26 host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:25.487458 containerd[1716]: 2024-06-25 18:45:25.383 [INFO][4307] ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:25.487458 containerd[1716]: 2024-06-25 18:45:25.385 [INFO][4307] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:25.487458 containerd[1716]: 2024-06-25 18:45:25.385 [INFO][4307] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:25.487458 containerd[1716]: 2024-06-25 18:45:25.387 [INFO][4307] ipam.go 1685: Creating new handle: k8s-pod-network.02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b Jun 25 18:45:25.487458 containerd[1716]: 2024-06-25 18:45:25.390 [INFO][4307] ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:25.487458 containerd[1716]: 2024-06-25 18:45:25.437 [INFO][4307] ipam.go 1216: Successfully claimed IPs: [192.168.40.1/26] block=192.168.40.0/26 handle="k8s-pod-network.02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:25.487458 containerd[1716]: 2024-06-25 18:45:25.437 
[INFO][4307] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.1/26] handle="k8s-pod-network.02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:25.487458 containerd[1716]: 2024-06-25 18:45:25.437 [INFO][4307] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:25.487458 containerd[1716]: 2024-06-25 18:45:25.437 [INFO][4307] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.40.1/26] IPv6=[] ContainerID="02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b" HandleID="k8s-pod-network.02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b" Workload="ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0" Jun 25 18:45:25.489497 containerd[1716]: 2024-06-25 18:45:25.441 [INFO][4297] k8s.go 386: Populated endpoint ContainerID="02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b" Namespace="calico-system" Pod="csi-node-driver-42k4m" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"459a93ef-d13a-41c8-9d3f-50f9914d6a1a", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-d50f1c7422", ContainerID:"", Pod:"csi-node-driver-42k4m", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.40.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0c71902af10", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:25.489497 containerd[1716]: 2024-06-25 18:45:25.441 [INFO][4297] k8s.go 387: Calico CNI using IPs: [192.168.40.1/32] ContainerID="02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b" Namespace="calico-system" Pod="csi-node-driver-42k4m" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0" Jun 25 18:45:25.489497 containerd[1716]: 2024-06-25 18:45:25.441 [INFO][4297] dataplane_linux.go 68: Setting the host side veth name to cali0c71902af10 ContainerID="02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b" Namespace="calico-system" Pod="csi-node-driver-42k4m" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0" Jun 25 18:45:25.489497 containerd[1716]: 2024-06-25 18:45:25.468 [INFO][4297] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b" Namespace="calico-system" Pod="csi-node-driver-42k4m" 
WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0" Jun 25 18:45:25.489497 containerd[1716]: 2024-06-25 18:45:25.468 [INFO][4297] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b" Namespace="calico-system" Pod="csi-node-driver-42k4m" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"459a93ef-d13a-41c8-9d3f-50f9914d6a1a", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-d50f1c7422", ContainerID:"02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b", Pod:"csi-node-driver-42k4m", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.40.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0c71902af10", MAC:"8e:95:fa:1a:bb:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:25.489497 containerd[1716]: 2024-06-25 18:45:25.483 [INFO][4297] k8s.go 500: Wrote updated endpoint to datastore ContainerID="02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b" Namespace="calico-system" Pod="csi-node-driver-42k4m" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0" Jun 25 18:45:25.519800 containerd[1716]: time="2024-06-25T18:45:25.519618419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:25.519800 containerd[1716]: time="2024-06-25T18:45:25.519680219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:25.519800 containerd[1716]: time="2024-06-25T18:45:25.519706219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:25.520146 containerd[1716]: time="2024-06-25T18:45:25.519812019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:25.545428 systemd[1]: Started cri-containerd-02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b.scope - libcontainer container 02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b. 
Jun 25 18:45:25.571052 containerd[1716]: time="2024-06-25T18:45:25.571009965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-42k4m,Uid:459a93ef-d13a-41c8-9d3f-50f9914d6a1a,Namespace:calico-system,Attempt:1,} returns sandbox id \"02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b\"" Jun 25 18:45:25.572949 containerd[1716]: time="2024-06-25T18:45:25.572672167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 18:45:26.776028 systemd-networkd[1359]: vxlan.calico: Link UP Jun 25 18:45:26.776041 systemd-networkd[1359]: vxlan.calico: Gained carrier Jun 25 18:45:27.000468 systemd-networkd[1359]: cali0c71902af10: Gained IPv6LL Jun 25 18:45:27.592496 containerd[1716]: time="2024-06-25T18:45:27.592439123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:27.594913 containerd[1716]: time="2024-06-25T18:45:27.594845225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 18:45:27.598464 containerd[1716]: time="2024-06-25T18:45:27.598407128Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:27.603453 containerd[1716]: time="2024-06-25T18:45:27.603397031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:27.604284 containerd[1716]: time="2024-06-25T18:45:27.604055132Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.031339565s" Jun 25 18:45:27.604284 containerd[1716]: time="2024-06-25T18:45:27.604092432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 18:45:27.606528 containerd[1716]: time="2024-06-25T18:45:27.606499734Z" level=info msg="CreateContainer within sandbox \"02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 18:45:27.645706 containerd[1716]: time="2024-06-25T18:45:27.645664965Z" level=info msg="CreateContainer within sandbox \"02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f8ee361f18efac0212e5af0385af6a8d9fa51355d31335684488e77a42d3af15\"" Jun 25 18:45:27.646298 containerd[1716]: time="2024-06-25T18:45:27.646260265Z" level=info msg="StartContainer for \"f8ee361f18efac0212e5af0385af6a8d9fa51355d31335684488e77a42d3af15\"" Jun 25 18:45:27.679600 systemd[1]: Started cri-containerd-f8ee361f18efac0212e5af0385af6a8d9fa51355d31335684488e77a42d3af15.scope - libcontainer container f8ee361f18efac0212e5af0385af6a8d9fa51355d31335684488e77a42d3af15. 
Jun 25 18:45:27.706865 containerd[1716]: time="2024-06-25T18:45:27.706826213Z" level=info msg="StartContainer for \"f8ee361f18efac0212e5af0385af6a8d9fa51355d31335684488e77a42d3af15\" returns successfully" Jun 25 18:45:27.708030 containerd[1716]: time="2024-06-25T18:45:27.707986314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 18:45:28.024468 systemd-networkd[1359]: vxlan.calico: Gained IPv6LL Jun 25 18:45:28.069092 containerd[1716]: time="2024-06-25T18:45:28.068908298Z" level=info msg="StopPodSandbox for \"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\"" Jun 25 18:45:28.069466 containerd[1716]: time="2024-06-25T18:45:28.069241898Z" level=info msg="StopPodSandbox for \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\"" Jun 25 18:45:28.191446 containerd[1716]: 2024-06-25 18:45:28.136 [INFO][4612] k8s.go 608: Cleaning up netns ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Jun 25 18:45:28.191446 containerd[1716]: 2024-06-25 18:45:28.137 [INFO][4612] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" iface="eth0" netns="/var/run/netns/cni-05208875-ab36-ae9e-df74-95cb697ee575" Jun 25 18:45:28.191446 containerd[1716]: 2024-06-25 18:45:28.137 [INFO][4612] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" iface="eth0" netns="/var/run/netns/cni-05208875-ab36-ae9e-df74-95cb697ee575" Jun 25 18:45:28.191446 containerd[1716]: 2024-06-25 18:45:28.137 [INFO][4612] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" iface="eth0" netns="/var/run/netns/cni-05208875-ab36-ae9e-df74-95cb697ee575" Jun 25 18:45:28.191446 containerd[1716]: 2024-06-25 18:45:28.137 [INFO][4612] k8s.go 615: Releasing IP address(es) ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Jun 25 18:45:28.191446 containerd[1716]: 2024-06-25 18:45:28.137 [INFO][4612] utils.go 188: Calico CNI releasing IP address ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Jun 25 18:45:28.191446 containerd[1716]: 2024-06-25 18:45:28.160 [INFO][4632] ipam_plugin.go 411: Releasing address using handleID ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" HandleID="k8s-pod-network.bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Workload="ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0" Jun 25 18:45:28.191446 containerd[1716]: 2024-06-25 18:45:28.161 [INFO][4632] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:28.191446 containerd[1716]: 2024-06-25 18:45:28.161 [INFO][4632] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:28.191446 containerd[1716]: 2024-06-25 18:45:28.183 [WARNING][4632] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" HandleID="k8s-pod-network.bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Workload="ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0" Jun 25 18:45:28.191446 containerd[1716]: 2024-06-25 18:45:28.184 [INFO][4632] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" HandleID="k8s-pod-network.bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Workload="ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0" Jun 25 18:45:28.191446 containerd[1716]: 2024-06-25 18:45:28.188 [INFO][4632] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:28.191446 containerd[1716]: 2024-06-25 18:45:28.189 [INFO][4612] k8s.go 621: Teardown processing complete. ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Jun 25 18:45:28.193909 containerd[1716]: time="2024-06-25T18:45:28.193407796Z" level=info msg="TearDown network for sandbox \"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\" successfully" Jun 25 18:45:28.193909 containerd[1716]: time="2024-06-25T18:45:28.193446396Z" level=info msg="StopPodSandbox for \"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\" returns successfully" Jun 25 18:45:28.194651 containerd[1716]: time="2024-06-25T18:45:28.194493797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d848d5b8b-xwq6w,Uid:ac43ad38-8dbb-4f6c-a78e-0a97e02beaac,Namespace:calico-system,Attempt:1,}" Jun 25 18:45:28.196123 systemd[1]: run-netns-cni\x2d05208875\x2dab36\x2dae9e\x2ddf74\x2d95cb697ee575.mount: Deactivated successfully. Jun 25 18:45:28.223211 containerd[1716]: 2024-06-25 18:45:28.184 [INFO][4623] k8s.go 608: Cleaning up netns ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Jun 25 18:45:28.223211 containerd[1716]: 2024-06-25 18:45:28.185 [INFO][4623] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" iface="eth0" netns="/var/run/netns/cni-d1ac178a-325a-01d3-5e9b-0ce8ffedde48" Jun 25 18:45:28.223211 containerd[1716]: 2024-06-25 18:45:28.185 [INFO][4623] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" iface="eth0" netns="/var/run/netns/cni-d1ac178a-325a-01d3-5e9b-0ce8ffedde48" Jun 25 18:45:28.223211 containerd[1716]: 2024-06-25 18:45:28.186 [INFO][4623] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" iface="eth0" netns="/var/run/netns/cni-d1ac178a-325a-01d3-5e9b-0ce8ffedde48" Jun 25 18:45:28.223211 containerd[1716]: 2024-06-25 18:45:28.186 [INFO][4623] k8s.go 615: Releasing IP address(es) ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Jun 25 18:45:28.223211 containerd[1716]: 2024-06-25 18:45:28.186 [INFO][4623] utils.go 188: Calico CNI releasing IP address ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Jun 25 18:45:28.223211 containerd[1716]: 2024-06-25 18:45:28.212 [INFO][4638] ipam_plugin.go 411: Releasing address using handleID ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" HandleID="k8s-pod-network.26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0" Jun 25 18:45:28.223211 containerd[1716]: 2024-06-25 18:45:28.213 [INFO][4638] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:28.223211 containerd[1716]: 2024-06-25 18:45:28.213 [INFO][4638] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:28.223211 containerd[1716]: 2024-06-25 18:45:28.218 [WARNING][4638] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" HandleID="k8s-pod-network.26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0" Jun 25 18:45:28.223211 containerd[1716]: 2024-06-25 18:45:28.218 [INFO][4638] ipam_plugin.go 439: Releasing address using workloadID ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" HandleID="k8s-pod-network.26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0" Jun 25 18:45:28.223211 containerd[1716]: 2024-06-25 18:45:28.220 [INFO][4638] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:28.223211 containerd[1716]: 2024-06-25 18:45:28.221 [INFO][4623] k8s.go 621: Teardown processing complete. 
ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Jun 25 18:45:28.225317 containerd[1716]: time="2024-06-25T18:45:28.223415120Z" level=info msg="TearDown network for sandbox \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\" successfully" Jun 25 18:45:28.225317 containerd[1716]: time="2024-06-25T18:45:28.223442020Z" level=info msg="StopPodSandbox for \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\" returns successfully" Jun 25 18:45:28.225317 containerd[1716]: time="2024-06-25T18:45:28.224092220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pmk7s,Uid:7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a,Namespace:kube-system,Attempt:1,}" Jun 25 18:45:28.369297 systemd-networkd[1359]: cali057f7027ce6: Link UP Jun 25 18:45:28.371553 systemd-networkd[1359]: cali057f7027ce6: Gained carrier Jun 25 18:45:28.392868 containerd[1716]: 2024-06-25 18:45:28.265 [INFO][4644] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0 calico-kube-controllers-5d848d5b8b- calico-system ac43ad38-8dbb-4f6c-a78e-0a97e02beaac 759 0 2024-06-25 18:44:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d848d5b8b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4012.0.0-a-d50f1c7422 calico-kube-controllers-5d848d5b8b-xwq6w eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali057f7027ce6 [] []}} ContainerID="3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc" Namespace="calico-system" Pod="calico-kube-controllers-5d848d5b8b-xwq6w" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-" Jun 25 18:45:28.392868 containerd[1716]: 2024-06-25 18:45:28.265 [INFO][4644] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc" Namespace="calico-system" Pod="calico-kube-controllers-5d848d5b8b-xwq6w" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0" Jun 25 18:45:28.392868 containerd[1716]: 2024-06-25 18:45:28.315 [INFO][4666] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc" HandleID="k8s-pod-network.3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc" Workload="ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0" Jun 25 18:45:28.392868 containerd[1716]: 2024-06-25 18:45:28.329 [INFO][4666] ipam_plugin.go 264: Auto assigning IP ContainerID="3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc" HandleID="k8s-pod-network.3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc" Workload="ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292670), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012.0.0-a-d50f1c7422", "pod":"calico-kube-controllers-5d848d5b8b-xwq6w", "timestamp":"2024-06-25 18:45:28.315745992 +0000 UTC"}, Hostname:"ci-4012.0.0-a-d50f1c7422", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:28.392868 containerd[1716]: 2024-06-25 18:45:28.330 [INFO][4666] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:28.392868 containerd[1716]: 2024-06-25 18:45:28.330 [INFO][4666] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:28.392868 containerd[1716]: 2024-06-25 18:45:28.330 [INFO][4666] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-d50f1c7422' Jun 25 18:45:28.392868 containerd[1716]: 2024-06-25 18:45:28.332 [INFO][4666] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:28.392868 containerd[1716]: 2024-06-25 18:45:28.338 [INFO][4666] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:28.392868 containerd[1716]: 2024-06-25 18:45:28.343 [INFO][4666] ipam.go 489: Trying affinity for 192.168.40.0/26 host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:28.392868 containerd[1716]: 2024-06-25 18:45:28.345 [INFO][4666] ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:28.392868 containerd[1716]: 2024-06-25 18:45:28.348 [INFO][4666] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:28.392868 containerd[1716]: 2024-06-25 18:45:28.348 [INFO][4666] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:28.392868 containerd[1716]: 2024-06-25 18:45:28.350 [INFO][4666] ipam.go 1685: Creating new handle: k8s-pod-network.3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc Jun 25 18:45:28.392868 containerd[1716]: 2024-06-25 18:45:28.355 [INFO][4666] ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:28.392868 containerd[1716]: 2024-06-25 18:45:28.362 [INFO][4666] ipam.go 1216: Successfully claimed IPs: [192.168.40.2/26] block=192.168.40.0/26 handle="k8s-pod-network.3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:28.392868 containerd[1716]: 2024-06-25 18:45:28.362 [INFO][4666] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.2/26] handle="k8s-pod-network.3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:28.392868 containerd[1716]: 2024-06-25 18:45:28.362 [INFO][4666] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:45:28.392868 containerd[1716]: 2024-06-25 18:45:28.362 [INFO][4666] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.40.2/26] IPv6=[] ContainerID="3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc" HandleID="k8s-pod-network.3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc" Workload="ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0" Jun 25 18:45:28.395382 containerd[1716]: 2024-06-25 18:45:28.364 [INFO][4644] k8s.go 386: Populated endpoint ContainerID="3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc" Namespace="calico-system" Pod="calico-kube-controllers-5d848d5b8b-xwq6w" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0", GenerateName:"calico-kube-controllers-5d848d5b8b-", Namespace:"calico-system", SelfLink:"", UID:"ac43ad38-8dbb-4f6c-a78e-0a97e02beaac", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d848d5b8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-d50f1c7422", ContainerID:"", Pod:"calico-kube-controllers-5d848d5b8b-xwq6w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali057f7027ce6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:28.395382 containerd[1716]: 2024-06-25 18:45:28.364 [INFO][4644] k8s.go 387: Calico CNI using IPs: [192.168.40.2/32] ContainerID="3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc" Namespace="calico-system" Pod="calico-kube-controllers-5d848d5b8b-xwq6w" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0" Jun 25 18:45:28.395382 containerd[1716]: 2024-06-25 18:45:28.364 [INFO][4644] dataplane_linux.go 68: Setting the host side veth name to cali057f7027ce6 ContainerID="3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc" Namespace="calico-system" Pod="calico-kube-controllers-5d848d5b8b-xwq6w" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0" Jun 25 18:45:28.395382 containerd[1716]: 2024-06-25 18:45:28.372 [INFO][4644] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc" Namespace="calico-system" Pod="calico-kube-controllers-5d848d5b8b-xwq6w" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0" Jun 25 18:45:28.395382 containerd[1716]: 2024-06-25 18:45:28.373 [INFO][4644] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc" Namespace="calico-system" Pod="calico-kube-controllers-5d848d5b8b-xwq6w" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0", GenerateName:"calico-kube-controllers-5d848d5b8b-", Namespace:"calico-system", SelfLink:"", UID:"ac43ad38-8dbb-4f6c-a78e-0a97e02beaac", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d848d5b8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-d50f1c7422", ContainerID:"3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc", Pod:"calico-kube-controllers-5d848d5b8b-xwq6w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali057f7027ce6", MAC:"2a:f7:75:8f:69:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:28.395382 containerd[1716]: 2024-06-25 18:45:28.386 [INFO][4644] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc" Namespace="calico-system" Pod="calico-kube-controllers-5d848d5b8b-xwq6w" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0" Jun 25 18:45:28.421137 containerd[1716]: time="2024-06-25T18:45:28.420963075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:28.421137 containerd[1716]: time="2024-06-25T18:45:28.421038475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:28.421137 containerd[1716]: time="2024-06-25T18:45:28.421067175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:28.421137 containerd[1716]: time="2024-06-25T18:45:28.421086575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:28.444746 systemd[1]: Started cri-containerd-3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc.scope - libcontainer container 3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc. 
Jun 25 18:45:28.470918 systemd-networkd[1359]: cali6252da30889: Link UP Jun 25 18:45:28.477678 systemd-networkd[1359]: cali6252da30889: Gained carrier Jun 25 18:45:28.491340 containerd[1716]: 2024-06-25 18:45:28.312 [INFO][4656] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0 coredns-76f75df574- kube-system 7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a 760 0 2024-06-25 18:44:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012.0.0-a-d50f1c7422 coredns-76f75df574-pmk7s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6252da30889 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005" Namespace="kube-system" Pod="coredns-76f75df574-pmk7s" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-" Jun 25 18:45:28.491340 containerd[1716]: 2024-06-25 18:45:28.313 [INFO][4656] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005" Namespace="kube-system" Pod="coredns-76f75df574-pmk7s" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0" Jun 25 18:45:28.491340 containerd[1716]: 2024-06-25 18:45:28.354 [INFO][4677] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005" HandleID="k8s-pod-network.5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0" Jun 25 18:45:28.491340 containerd[1716]: 2024-06-25 18:45:28.368 [INFO][4677] ipam_plugin.go 264: Auto assigning IP ContainerID="5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005" HandleID="k8s-pod-network.5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290220), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012.0.0-a-d50f1c7422", "pod":"coredns-76f75df574-pmk7s", "timestamp":"2024-06-25 18:45:28.354442923 +0000 UTC"}, Hostname:"ci-4012.0.0-a-d50f1c7422", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:28.491340 containerd[1716]: 2024-06-25 18:45:28.368 [INFO][4677] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:28.491340 containerd[1716]: 2024-06-25 18:45:28.368 [INFO][4677] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:45:28.491340 containerd[1716]: 2024-06-25 18:45:28.368 [INFO][4677] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-d50f1c7422' Jun 25 18:45:28.491340 containerd[1716]: 2024-06-25 18:45:28.373 [INFO][4677] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:28.491340 containerd[1716]: 2024-06-25 18:45:28.438 [INFO][4677] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:28.491340 containerd[1716]: 2024-06-25 18:45:28.445 [INFO][4677] ipam.go 489: Trying affinity for 192.168.40.0/26 host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:28.491340 containerd[1716]: 2024-06-25 18:45:28.447 [INFO][4677] ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:28.491340 containerd[1716]: 2024-06-25 18:45:28.450 [INFO][4677] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:28.491340 containerd[1716]: 2024-06-25 18:45:28.450 [INFO][4677] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:28.491340 containerd[1716]: 2024-06-25 18:45:28.452 [INFO][4677] ipam.go 1685: Creating new handle: k8s-pod-network.5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005 Jun 25 18:45:28.491340 containerd[1716]: 2024-06-25 18:45:28.456 [INFO][4677] ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:28.491340 containerd[1716]: 2024-06-25 18:45:28.463 [INFO][4677] ipam.go 1216: Successfully claimed IPs: [192.168.40.3/26] block=192.168.40.0/26 handle="k8s-pod-network.5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:28.491340 containerd[1716]: 2024-06-25 18:45:28.463 [INFO][4677] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.3/26] handle="k8s-pod-network.5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:28.491340 containerd[1716]: 2024-06-25 18:45:28.463 [INFO][4677] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:45:28.491340 containerd[1716]: 2024-06-25 18:45:28.463 [INFO][4677] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.40.3/26] IPv6=[] ContainerID="5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005" HandleID="k8s-pod-network.5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0" Jun 25 18:45:28.494241 containerd[1716]: 2024-06-25 18:45:28.465 [INFO][4656] k8s.go 386: Populated endpoint ContainerID="5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005" Namespace="kube-system" Pod="coredns-76f75df574-pmk7s" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-d50f1c7422", ContainerID:"", Pod:"coredns-76f75df574-pmk7s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6252da30889", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:28.494241 containerd[1716]: 2024-06-25 18:45:28.465 [INFO][4656] k8s.go 387: Calico CNI using IPs: [192.168.40.3/32] ContainerID="5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005" Namespace="kube-system" Pod="coredns-76f75df574-pmk7s" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0" Jun 25 18:45:28.494241 containerd[1716]: 2024-06-25 18:45:28.466 [INFO][4656] dataplane_linux.go 68: Setting the host side veth name to cali6252da30889 ContainerID="5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005" Namespace="kube-system" Pod="coredns-76f75df574-pmk7s" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0" Jun 25 18:45:28.494241 containerd[1716]: 2024-06-25 18:45:28.472 [INFO][4656] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005" Namespace="kube-system" Pod="coredns-76f75df574-pmk7s" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0" 
Jun 25 18:45:28.494241 containerd[1716]: 2024-06-25 18:45:28.474 [INFO][4656] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005" Namespace="kube-system" Pod="coredns-76f75df574-pmk7s" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-d50f1c7422", ContainerID:"5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005", Pod:"coredns-76f75df574-pmk7s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6252da30889", MAC:"a2:a2:cf:07:85:1d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:28.494241 containerd[1716]: 2024-06-25 18:45:28.488 [INFO][4656] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005" Namespace="kube-system" Pod="coredns-76f75df574-pmk7s" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0" Jun 25 18:45:28.533782 containerd[1716]: time="2024-06-25T18:45:28.533173763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:28.533782 containerd[1716]: time="2024-06-25T18:45:28.533241264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:28.533782 containerd[1716]: time="2024-06-25T18:45:28.533279464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:28.533782 containerd[1716]: time="2024-06-25T18:45:28.533297264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:28.538119 containerd[1716]: time="2024-06-25T18:45:28.537992567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d848d5b8b-xwq6w,Uid:ac43ad38-8dbb-4f6c-a78e-0a97e02beaac,Namespace:calico-system,Attempt:1,} returns sandbox id \"3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc\"" Jun 25 18:45:28.558455 systemd[1]: Started cri-containerd-5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005.scope - libcontainer container 5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005. Jun 25 18:45:28.598845 containerd[1716]: time="2024-06-25T18:45:28.598791815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pmk7s,Uid:7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a,Namespace:kube-system,Attempt:1,} returns sandbox id \"5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005\"" Jun 25 18:45:28.602105 containerd[1716]: time="2024-06-25T18:45:28.601931018Z" level=info msg="CreateContainer within sandbox \"5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:45:28.634670 containerd[1716]: time="2024-06-25T18:45:28.634565843Z" level=info msg="CreateContainer within sandbox \"5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"390e51dd6bfb0233711cdd7c7698b35646be796c9f8a46921a16c4bf0a522fec\"" Jun 25 18:45:28.639303 containerd[1716]: time="2024-06-25T18:45:28.637415346Z" level=info msg="StartContainer for \"390e51dd6bfb0233711cdd7c7698b35646be796c9f8a46921a16c4bf0a522fec\"" Jun 25 18:45:28.648463 systemd[1]: run-netns-cni\x2dd1ac178a\x2d325a\x2d01d3\x2d5e9b\x2d0ce8ffedde48.mount: Deactivated successfully. Jun 25 18:45:28.682515 systemd[1]: Started cri-containerd-390e51dd6bfb0233711cdd7c7698b35646be796c9f8a46921a16c4bf0a522fec.scope - libcontainer container 390e51dd6bfb0233711cdd7c7698b35646be796c9f8a46921a16c4bf0a522fec. Jun 25 18:45:28.709517 containerd[1716]: time="2024-06-25T18:45:28.709472902Z" level=info msg="StartContainer for \"390e51dd6bfb0233711cdd7c7698b35646be796c9f8a46921a16c4bf0a522fec\" returns successfully" Jun 25 18:45:29.067822 containerd[1716]: time="2024-06-25T18:45:29.067778584Z" level=info msg="StopPodSandbox for \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\"" Jun 25 18:45:29.141728 containerd[1716]: 2024-06-25 18:45:29.107 [INFO][4845] k8s.go 608: Cleaning up netns ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Jun 25 18:45:29.141728 containerd[1716]: 2024-06-25 18:45:29.107 [INFO][4845] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" iface="eth0" netns="/var/run/netns/cni-6129613d-01a5-5b8a-5132-3fefd64e1e96" Jun 25 18:45:29.141728 containerd[1716]: 2024-06-25 18:45:29.108 [INFO][4845] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" iface="eth0" netns="/var/run/netns/cni-6129613d-01a5-5b8a-5132-3fefd64e1e96" Jun 25 18:45:29.141728 containerd[1716]: 2024-06-25 18:45:29.108 [INFO][4845] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" iface="eth0" netns="/var/run/netns/cni-6129613d-01a5-5b8a-5132-3fefd64e1e96" Jun 25 18:45:29.141728 containerd[1716]: 2024-06-25 18:45:29.108 [INFO][4845] k8s.go 615: Releasing IP address(es) ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Jun 25 18:45:29.141728 containerd[1716]: 2024-06-25 18:45:29.109 [INFO][4845] utils.go 188: Calico CNI releasing IP address ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Jun 25 18:45:29.141728 containerd[1716]: 2024-06-25 18:45:29.129 [INFO][4852] ipam_plugin.go 411: Releasing address using handleID ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" HandleID="k8s-pod-network.6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0" Jun 25 18:45:29.141728 containerd[1716]: 2024-06-25 18:45:29.129 [INFO][4852] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:29.141728 containerd[1716]: 2024-06-25 18:45:29.129 [INFO][4852] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:29.141728 containerd[1716]: 2024-06-25 18:45:29.136 [WARNING][4852] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" HandleID="k8s-pod-network.6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0" Jun 25 18:45:29.141728 containerd[1716]: 2024-06-25 18:45:29.137 [INFO][4852] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" HandleID="k8s-pod-network.6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0" Jun 25 18:45:29.141728 containerd[1716]: 2024-06-25 18:45:29.138 [INFO][4852] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:29.141728 containerd[1716]: 2024-06-25 18:45:29.139 [INFO][4845] k8s.go 621: Teardown processing complete. ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Jun 25 18:45:29.144803 containerd[1716]: time="2024-06-25T18:45:29.144763645Z" level=info msg="TearDown network for sandbox \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\" successfully" Jun 25 18:45:29.144803 containerd[1716]: time="2024-06-25T18:45:29.144802845Z" level=info msg="StopPodSandbox for \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\" returns successfully" Jun 25 18:45:29.147537 systemd[1]: run-netns-cni\x2d6129613d\x2d01a5\x2d5b8a\x2d5132\x2d3fefd64e1e96.mount: Deactivated successfully. 
Jun 25 18:45:29.149107 containerd[1716]: time="2024-06-25T18:45:29.149061748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f5rm6,Uid:6b78a687-8f7f-4336-ad30-99e97b06a846,Namespace:kube-system,Attempt:1,}" Jun 25 18:45:29.591636 kubelet[3241]: I0625 18:45:29.591063 3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-pmk7s" podStartSLOduration=47.591009596 podStartE2EDuration="47.591009596s" podCreationTimestamp="2024-06-25 18:44:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:45:29.548860663 +0000 UTC m=+59.576068970" watchObservedRunningTime="2024-06-25 18:45:29.591009596 +0000 UTC m=+59.618217903" Jun 25 18:45:29.600092 systemd-networkd[1359]: caliec811c4f579: Link UP Jun 25 18:45:29.601507 systemd-networkd[1359]: caliec811c4f579: Gained carrier Jun 25 18:45:29.691344 containerd[1716]: 2024-06-25 18:45:29.214 [INFO][4859] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0 coredns-76f75df574- kube-system 6b78a687-8f7f-4336-ad30-99e97b06a846 773 0 2024-06-25 18:44:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012.0.0-a-d50f1c7422 coredns-76f75df574-f5rm6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliec811c4f579 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4" Namespace="kube-system" Pod="coredns-76f75df574-f5rm6" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-" Jun 25 18:45:29.691344 containerd[1716]: 2024-06-25 18:45:29.215 [INFO][4859] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4" Namespace="kube-system" Pod="coredns-76f75df574-f5rm6" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0" Jun 25 18:45:29.691344 containerd[1716]: 2024-06-25 18:45:29.248 [INFO][4869] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4" HandleID="k8s-pod-network.34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0" Jun 25 18:45:29.691344 containerd[1716]: 2024-06-25 18:45:29.256 [INFO][4869] ipam_plugin.go 264: Auto assigning IP ContainerID="34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4" HandleID="k8s-pod-network.34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000267de0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012.0.0-a-d50f1c7422", "pod":"coredns-76f75df574-f5rm6", "timestamp":"2024-06-25 18:45:29.248755127 +0000 UTC"}, Hostname:"ci-4012.0.0-a-d50f1c7422", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:29.691344 containerd[1716]: 2024-06-25 18:45:29.256 [INFO][4869] 
ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:29.691344 containerd[1716]: 2024-06-25 18:45:29.256 [INFO][4869] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:29.691344 containerd[1716]: 2024-06-25 18:45:29.256 [INFO][4869] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-d50f1c7422' Jun 25 18:45:29.691344 containerd[1716]: 2024-06-25 18:45:29.258 [INFO][4869] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:29.691344 containerd[1716]: 2024-06-25 18:45:29.262 [INFO][4869] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:29.691344 containerd[1716]: 2024-06-25 18:45:29.530 [INFO][4869] ipam.go 489: Trying affinity for 192.168.40.0/26 host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:29.691344 containerd[1716]: 2024-06-25 18:45:29.536 [INFO][4869] ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:29.691344 containerd[1716]: 2024-06-25 18:45:29.543 [INFO][4869] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:29.691344 containerd[1716]: 2024-06-25 18:45:29.543 [INFO][4869] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:29.691344 containerd[1716]: 2024-06-25 18:45:29.545 [INFO][4869] ipam.go 1685: Creating new handle: k8s-pod-network.34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4 Jun 25 18:45:29.691344 containerd[1716]: 2024-06-25 18:45:29.552 [INFO][4869] ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:29.691344 containerd[1716]: 2024-06-25 18:45:29.592 [INFO][4869] ipam.go 1216: Successfully claimed IPs: [192.168.40.4/26] block=192.168.40.0/26 handle="k8s-pod-network.34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:29.691344 containerd[1716]: 2024-06-25 18:45:29.592 [INFO][4869] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.4/26] handle="k8s-pod-network.34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:29.691344 containerd[1716]: 2024-06-25 18:45:29.592 [INFO][4869] ipam_plugin.go 373: Released host-wide IPAM lock. 
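The kubelet entry at 18:45:29.591 earlier in this burst reports podStartSLOduration=47.591009596s for coredns-76f75df574-pmk7s; that figure is simply observedRunningTime (18:45:29.591009596 UTC) minus podCreationTimestamp (18:44:42 UTC), with the two pull timestamps left at Go's zero time. A quick check using only the timestamps printed in the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Fractional seconds in the input are accepted on parse even though the
	// layout omits them.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2024-06-25 18:44:42 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2024-06-25 18:45:29.591009596 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 47.591009596s
}
```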
Jun 25 18:45:29.691344 containerd[1716]: 2024-06-25 18:45:29.592 [INFO][4869] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.40.4/26] IPv6=[] ContainerID="34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4" HandleID="k8s-pod-network.34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0" Jun 25 18:45:29.693502 containerd[1716]: 2024-06-25 18:45:29.594 [INFO][4859] k8s.go 386: Populated endpoint ContainerID="34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4" Namespace="kube-system" Pod="coredns-76f75df574-f5rm6" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6b78a687-8f7f-4336-ad30-99e97b06a846", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-d50f1c7422", ContainerID:"", Pod:"coredns-76f75df574-f5rm6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliec811c4f579", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:29.693502 containerd[1716]: 2024-06-25 18:45:29.594 [INFO][4859] k8s.go 387: Calico CNI using IPs: [192.168.40.4/32] ContainerID="34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4" Namespace="kube-system" Pod="coredns-76f75df574-f5rm6" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0" Jun 25 18:45:29.693502 containerd[1716]: 2024-06-25 18:45:29.594 [INFO][4859] dataplane_linux.go 68: Setting the host side veth name to caliec811c4f579 ContainerID="34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4" Namespace="kube-system" Pod="coredns-76f75df574-f5rm6" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0" Jun 25 18:45:29.693502 containerd[1716]: 2024-06-25 18:45:29.599 [INFO][4859] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4" Namespace="kube-system" Pod="coredns-76f75df574-f5rm6" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0" 
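In the endpoint dump above the pod address appears as a /32 (192.168.40.4/32) carved out of the node's 192.168.40.0/26 IPAM block that the preceding entries were allocating from, and the port numbers are printed in hex by the struct formatter. A short sketch confirming both readings — nothing here is Calico API, just the arithmetic:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// The /32 pod address sits inside the node's /26 block.
	_, block, _ := net.ParseCIDR("192.168.40.0/26")
	podIP, _, _ := net.ParseCIDR("192.168.40.4/32")
	fmt.Println(block.Contains(podIP)) // true

	// Port values in the WorkloadEndpoint dump are shown in hex.
	fmt.Println(0x35, 0x23c1) // 53 9153 -> dns/dns-tcp and the CoreDNS metrics port
}
```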
Jun 25 18:45:29.693502 containerd[1716]: 2024-06-25 18:45:29.599 [INFO][4859] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4" Namespace="kube-system" Pod="coredns-76f75df574-f5rm6" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6b78a687-8f7f-4336-ad30-99e97b06a846", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-d50f1c7422", ContainerID:"34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4", Pod:"coredns-76f75df574-f5rm6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliec811c4f579", MAC:"5a:03:50:6b:b2:17", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:29.693502 containerd[1716]: 2024-06-25 18:45:29.686 [INFO][4859] k8s.go 500: Wrote updated endpoint to datastore ContainerID="34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4" Namespace="kube-system" Pod="coredns-76f75df574-f5rm6" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0" Jun 25 18:45:29.743039 containerd[1716]: time="2024-06-25T18:45:29.742304915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:29.743039 containerd[1716]: time="2024-06-25T18:45:29.742405915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:29.746032 containerd[1716]: time="2024-06-25T18:45:29.745043518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:29.746032 containerd[1716]: time="2024-06-25T18:45:29.745073118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:29.783902 systemd[1]: Started cri-containerd-34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4.scope - libcontainer container 34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4. Jun 25 18:45:29.842540 containerd[1716]: time="2024-06-25T18:45:29.842328094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f5rm6,Uid:6b78a687-8f7f-4336-ad30-99e97b06a846,Namespace:kube-system,Attempt:1,} returns sandbox id \"34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4\"" Jun 25 18:45:29.848620 containerd[1716]: time="2024-06-25T18:45:29.848580699Z" level=info msg="CreateContainer within sandbox \"34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:45:29.891610 containerd[1716]: time="2024-06-25T18:45:29.891529533Z" level=info msg="CreateContainer within sandbox \"34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a04ee5ed5121520b049c8988b4a78679503a7a1103bb2c4c7705990312fb3c13\"" Jun 25 18:45:29.892163 containerd[1716]: time="2024-06-25T18:45:29.891925733Z" level=info msg="StartContainer for \"a04ee5ed5121520b049c8988b4a78679503a7a1103bb2c4c7705990312fb3c13\"" Jun 25 18:45:29.927447 systemd[1]: Started cri-containerd-a04ee5ed5121520b049c8988b4a78679503a7a1103bb2c4c7705990312fb3c13.scope - libcontainer container a04ee5ed5121520b049c8988b4a78679503a7a1103bb2c4c7705990312fb3c13. Jun 25 18:45:30.058449 containerd[1716]: time="2024-06-25T18:45:30.058358464Z" level=info msg="StartContainer for \"a04ee5ed5121520b049c8988b4a78679503a7a1103bb2c4c7705990312fb3c13\" returns successfully" Jun 25 18:45:30.060700 containerd[1716]: time="2024-06-25T18:45:30.060239966Z" level=info msg="StopPodSandbox for \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\"" Jun 25 18:45:30.072403 systemd-networkd[1359]: cali057f7027ce6: Gained IPv6LL Jun 25 18:45:30.120781 containerd[1716]: time="2024-06-25T18:45:30.119734213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:30.135115 containerd[1716]: time="2024-06-25T18:45:30.135047025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 18:45:30.136827 systemd-networkd[1359]: cali6252da30889: Gained IPv6LL Jun 25 18:45:30.143579 containerd[1716]: time="2024-06-25T18:45:30.141941230Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:30.145297 containerd[1716]: 2024-06-25 18:45:30.100 [WARNING][4989] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6b78a687-8f7f-4336-ad30-99e97b06a846", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-d50f1c7422", ContainerID:"34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4", Pod:"coredns-76f75df574-f5rm6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliec811c4f579", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:30.145297 containerd[1716]: 2024-06-25 18:45:30.101 [INFO][4989] k8s.go 608: Cleaning up netns ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Jun 25 18:45:30.145297 containerd[1716]: 2024-06-25 18:45:30.101 [INFO][4989] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" iface="eth0" netns="" Jun 25 18:45:30.145297 containerd[1716]: 2024-06-25 18:45:30.101 [INFO][4989] k8s.go 615: Releasing IP address(es) ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Jun 25 18:45:30.145297 containerd[1716]: 2024-06-25 18:45:30.101 [INFO][4989] utils.go 188: Calico CNI releasing IP address ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Jun 25 18:45:30.145297 containerd[1716]: 2024-06-25 18:45:30.126 [INFO][4997] ipam_plugin.go 411: Releasing address using handleID ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" HandleID="k8s-pod-network.6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0" Jun 25 18:45:30.145297 containerd[1716]: 2024-06-25 18:45:30.126 [INFO][4997] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:30.145297 containerd[1716]: 2024-06-25 18:45:30.126 [INFO][4997] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:30.145297 containerd[1716]: 2024-06-25 18:45:30.136 [WARNING][4997] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" HandleID="k8s-pod-network.6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0" Jun 25 18:45:30.145297 containerd[1716]: 2024-06-25 18:45:30.138 [INFO][4997] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" HandleID="k8s-pod-network.6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0" Jun 25 18:45:30.145297 containerd[1716]: 2024-06-25 18:45:30.141 [INFO][4997] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:30.145297 containerd[1716]: 2024-06-25 18:45:30.144 [INFO][4989] k8s.go 621: Teardown processing complete. ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Jun 25 18:45:30.146061 containerd[1716]: time="2024-06-25T18:45:30.145373433Z" level=info msg="TearDown network for sandbox \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\" successfully" Jun 25 18:45:30.146061 containerd[1716]: time="2024-06-25T18:45:30.145411533Z" level=info msg="StopPodSandbox for \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\" returns successfully" Jun 25 18:45:30.146061 containerd[1716]: time="2024-06-25T18:45:30.145964733Z" level=info msg="RemovePodSandbox for \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\"" Jun 25 18:45:30.146061 containerd[1716]: time="2024-06-25T18:45:30.146002133Z" level=info msg="Forcibly stopping sandbox \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\"" Jun 25 18:45:30.147368 containerd[1716]: time="2024-06-25T18:45:30.147255734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:30.149113 containerd[1716]: time="2024-06-25T18:45:30.148982236Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.440955722s" Jun 25 18:45:30.149113 containerd[1716]: time="2024-06-25T18:45:30.149038036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 18:45:30.151659 containerd[1716]: time="2024-06-25T18:45:30.151510938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 18:45:30.153726 containerd[1716]: time="2024-06-25T18:45:30.153625139Z" level=info msg="CreateContainer within sandbox \"02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 18:45:30.195360 containerd[1716]: time="2024-06-25T18:45:30.195310172Z" level=info msg="CreateContainer within sandbox \"02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"c4ba6406d49ad99097b3e16b5e5823d8b9b12bc827a6674ca9d6a808cdd7e165\"" Jun 25 18:45:30.198302 containerd[1716]: time="2024-06-25T18:45:30.197320674Z" level=info msg="StartContainer for \"c4ba6406d49ad99097b3e16b5e5823d8b9b12bc827a6674ca9d6a808cdd7e165\"" Jun 25 18:45:30.233787 containerd[1716]: 2024-06-25 18:45:30.189 [WARNING][5016] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6b78a687-8f7f-4336-ad30-99e97b06a846", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-d50f1c7422", ContainerID:"34189e9fdc56c9586b936b67d1c005f0edf99fb30318663873b29ecae43476d4", Pod:"coredns-76f75df574-f5rm6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliec811c4f579", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:30.233787 containerd[1716]: 2024-06-25 18:45:30.190 [INFO][5016] k8s.go 608: Cleaning up netns ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Jun 25 18:45:30.233787 containerd[1716]: 2024-06-25 18:45:30.190 [INFO][5016] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" iface="eth0" netns="" Jun 25 18:45:30.233787 containerd[1716]: 2024-06-25 18:45:30.190 [INFO][5016] k8s.go 615: Releasing IP address(es) ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Jun 25 18:45:30.233787 containerd[1716]: 2024-06-25 18:45:30.190 [INFO][5016] utils.go 188: Calico CNI releasing IP address ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Jun 25 18:45:30.233787 containerd[1716]: 2024-06-25 18:45:30.218 [INFO][5024] ipam_plugin.go 411: Releasing address using handleID ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" HandleID="k8s-pod-network.6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0" Jun 25 18:45:30.233787 containerd[1716]: 2024-06-25 18:45:30.219 [INFO][5024] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:30.233787 containerd[1716]: 2024-06-25 18:45:30.219 [INFO][5024] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:30.233787 containerd[1716]: 2024-06-25 18:45:30.227 [WARNING][5024] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" HandleID="k8s-pod-network.6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0" Jun 25 18:45:30.233787 containerd[1716]: 2024-06-25 18:45:30.228 [INFO][5024] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" HandleID="k8s-pod-network.6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--f5rm6-eth0" Jun 25 18:45:30.233787 containerd[1716]: 2024-06-25 18:45:30.229 [INFO][5024] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:30.233787 containerd[1716]: 2024-06-25 18:45:30.232 [INFO][5016] k8s.go 621: Teardown processing complete. ContainerID="6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112" Jun 25 18:45:30.234565 containerd[1716]: time="2024-06-25T18:45:30.233823102Z" level=info msg="TearDown network for sandbox \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\" successfully" Jun 25 18:45:30.236508 systemd[1]: Started cri-containerd-c4ba6406d49ad99097b3e16b5e5823d8b9b12bc827a6674ca9d6a808cdd7e165.scope - libcontainer container c4ba6406d49ad99097b3e16b5e5823d8b9b12bc827a6674ca9d6a808cdd7e165. Jun 25 18:45:30.245168 containerd[1716]: time="2024-06-25T18:45:30.244947411Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:45:30.245168 containerd[1716]: time="2024-06-25T18:45:30.245041211Z" level=info msg="RemovePodSandbox \"6e246b1e934892bb246e2935044aa85aba0d466f2042db2d7f5e23e743061112\" returns successfully" Jun 25 18:45:30.245740 containerd[1716]: time="2024-06-25T18:45:30.245660812Z" level=info msg="StopPodSandbox for \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\"" Jun 25 18:45:30.276340 containerd[1716]: time="2024-06-25T18:45:30.276181736Z" level=info msg="StartContainer for \"c4ba6406d49ad99097b3e16b5e5823d8b9b12bc827a6674ca9d6a808cdd7e165\" returns successfully" Jun 25 18:45:30.334829 kubelet[3241]: I0625 18:45:30.334795 3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-42k4m" podStartSLOduration=36.757435312 podStartE2EDuration="41.334748282s" podCreationTimestamp="2024-06-25 18:44:49 +0000 UTC" firstStartedPulling="2024-06-25 18:45:25.572391466 +0000 UTC m=+55.599599773" lastFinishedPulling="2024-06-25 18:45:30.149704436 +0000 UTC m=+60.176912743" observedRunningTime="2024-06-25 18:45:30.333515981 +0000 UTC m=+60.360724388" watchObservedRunningTime="2024-06-25 18:45:30.334748282 +0000 UTC m=+60.361956689" Jun 25 18:45:30.453422 containerd[1716]: 2024-06-25 18:45:30.294 [WARNING][5066] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"459a93ef-d13a-41c8-9d3f-50f9914d6a1a", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-d50f1c7422", ContainerID:"02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b", Pod:"csi-node-driver-42k4m", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.40.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0c71902af10", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:30.453422 containerd[1716]: 2024-06-25 18:45:30.294 [INFO][5066] k8s.go 608: Cleaning up netns ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Jun 25 18:45:30.453422 containerd[1716]: 2024-06-25 18:45:30.294 [INFO][5066] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" iface="eth0" netns="" Jun 25 18:45:30.453422 containerd[1716]: 2024-06-25 18:45:30.294 [INFO][5066] k8s.go 615: Releasing IP address(es) ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Jun 25 18:45:30.453422 containerd[1716]: 2024-06-25 18:45:30.294 [INFO][5066] utils.go 188: Calico CNI releasing IP address ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Jun 25 18:45:30.453422 containerd[1716]: 2024-06-25 18:45:30.322 [INFO][5085] ipam_plugin.go 411: Releasing address using handleID ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" HandleID="k8s-pod-network.4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Workload="ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0" Jun 25 18:45:30.453422 containerd[1716]: 2024-06-25 18:45:30.323 [INFO][5085] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:30.453422 containerd[1716]: 2024-06-25 18:45:30.323 [INFO][5085] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:30.453422 containerd[1716]: 2024-06-25 18:45:30.338 [WARNING][5085] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" HandleID="k8s-pod-network.4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Workload="ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0" Jun 25 18:45:30.453422 containerd[1716]: 2024-06-25 18:45:30.338 [INFO][5085] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" HandleID="k8s-pod-network.4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Workload="ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0" Jun 25 18:45:30.453422 containerd[1716]: 2024-06-25 18:45:30.445 [INFO][5085] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:30.453422 containerd[1716]: 2024-06-25 18:45:30.450 [INFO][5066] k8s.go 621: Teardown processing complete. ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Jun 25 18:45:30.454162 containerd[1716]: time="2024-06-25T18:45:30.454108976Z" level=info msg="TearDown network for sandbox \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\" successfully" Jun 25 18:45:30.454162 containerd[1716]: time="2024-06-25T18:45:30.454159676Z" level=info msg="StopPodSandbox for \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\" returns successfully" Jun 25 18:45:30.455021 containerd[1716]: time="2024-06-25T18:45:30.454796676Z" level=info msg="RemovePodSandbox for \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\"" Jun 25 18:45:30.455021 containerd[1716]: time="2024-06-25T18:45:30.454844676Z" level=info msg="Forcibly stopping sandbox \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\"" Jun 25 18:45:30.516044 containerd[1716]: 2024-06-25 18:45:30.488 [WARNING][5103] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"459a93ef-d13a-41c8-9d3f-50f9914d6a1a", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-d50f1c7422", ContainerID:"02f7952e21ba981de262d10938985466bd2070f8ea99e6edc4ee16ad4b0db26b", Pod:"csi-node-driver-42k4m", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.40.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0c71902af10", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:30.516044 containerd[1716]: 2024-06-25 18:45:30.488 [INFO][5103] k8s.go 608: Cleaning up netns ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Jun 25 18:45:30.516044 containerd[1716]: 2024-06-25 18:45:30.488 [INFO][5103] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" iface="eth0" netns="" Jun 25 18:45:30.516044 containerd[1716]: 2024-06-25 18:45:30.488 [INFO][5103] k8s.go 615: Releasing IP address(es) ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Jun 25 18:45:30.516044 containerd[1716]: 2024-06-25 18:45:30.488 [INFO][5103] utils.go 188: Calico CNI releasing IP address ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Jun 25 18:45:30.516044 containerd[1716]: 2024-06-25 18:45:30.507 [INFO][5109] ipam_plugin.go 411: Releasing address using handleID ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" HandleID="k8s-pod-network.4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Workload="ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0" Jun 25 18:45:30.516044 containerd[1716]: 2024-06-25 18:45:30.507 [INFO][5109] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:30.516044 containerd[1716]: 2024-06-25 18:45:30.507 [INFO][5109] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:30.516044 containerd[1716]: 2024-06-25 18:45:30.512 [WARNING][5109] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" HandleID="k8s-pod-network.4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Workload="ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0" Jun 25 18:45:30.516044 containerd[1716]: 2024-06-25 18:45:30.512 [INFO][5109] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" HandleID="k8s-pod-network.4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Workload="ci--4012.0.0--a--d50f1c7422-k8s-csi--node--driver--42k4m-eth0" Jun 25 18:45:30.516044 containerd[1716]: 2024-06-25 18:45:30.514 [INFO][5109] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:30.516044 containerd[1716]: 2024-06-25 18:45:30.514 [INFO][5103] k8s.go 621: Teardown processing complete. ContainerID="4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20" Jun 25 18:45:30.516691 containerd[1716]: time="2024-06-25T18:45:30.516058725Z" level=info msg="TearDown network for sandbox \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\" successfully" Jun 25 18:45:30.526420 containerd[1716]: time="2024-06-25T18:45:30.526361333Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:45:30.526588 containerd[1716]: time="2024-06-25T18:45:30.526435333Z" level=info msg="RemovePodSandbox \"4e32ee1881f25feed82a359f6681f9d1c8de0b66c3c63a6f38a76211f0950b20\" returns successfully" Jun 25 18:45:30.527001 containerd[1716]: time="2024-06-25T18:45:30.526975033Z" level=info msg="StopPodSandbox for \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\"" Jun 25 18:45:30.630156 containerd[1716]: 2024-06-25 18:45:30.591 [WARNING][5127] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-d50f1c7422", ContainerID:"5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005", Pod:"coredns-76f75df574-pmk7s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6252da30889", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:30.630156 containerd[1716]: 2024-06-25 18:45:30.591 [INFO][5127] k8s.go 608: Cleaning up netns ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Jun 25 18:45:30.630156 containerd[1716]: 2024-06-25 18:45:30.591 [INFO][5127] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" iface="eth0" netns="" Jun 25 18:45:30.630156 containerd[1716]: 2024-06-25 18:45:30.591 [INFO][5127] k8s.go 615: Releasing IP address(es) ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Jun 25 18:45:30.630156 containerd[1716]: 2024-06-25 18:45:30.591 [INFO][5127] utils.go 188: Calico CNI releasing IP address ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Jun 25 18:45:30.630156 containerd[1716]: 2024-06-25 18:45:30.615 [INFO][5133] ipam_plugin.go 411: Releasing address using handleID ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" HandleID="k8s-pod-network.26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0" Jun 25 18:45:30.630156 containerd[1716]: 2024-06-25 18:45:30.615 [INFO][5133] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:30.630156 containerd[1716]: 2024-06-25 18:45:30.615 [INFO][5133] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:30.630156 containerd[1716]: 2024-06-25 18:45:30.624 [WARNING][5133] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" HandleID="k8s-pod-network.26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0" Jun 25 18:45:30.630156 containerd[1716]: 2024-06-25 18:45:30.624 [INFO][5133] ipam_plugin.go 439: Releasing address using workloadID ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" HandleID="k8s-pod-network.26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0" Jun 25 18:45:30.630156 containerd[1716]: 2024-06-25 18:45:30.626 [INFO][5133] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:30.630156 containerd[1716]: 2024-06-25 18:45:30.628 [INFO][5127] k8s.go 621: Teardown processing complete. ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Jun 25 18:45:30.631352 containerd[1716]: time="2024-06-25T18:45:30.630118614Z" level=info msg="TearDown network for sandbox \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\" successfully" Jun 25 18:45:30.631352 containerd[1716]: time="2024-06-25T18:45:30.630257914Z" level=info msg="StopPodSandbox for \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\" returns successfully" Jun 25 18:45:30.631352 containerd[1716]: time="2024-06-25T18:45:30.631256015Z" level=info msg="RemovePodSandbox for \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\"" Jun 25 18:45:30.631352 containerd[1716]: time="2024-06-25T18:45:30.631309415Z" level=info msg="Forcibly stopping sandbox \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\"" Jun 25 18:45:30.706734 containerd[1716]: 2024-06-25 18:45:30.679 [WARNING][5157] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7f1af524-d05f-4a7f-a4e5-c3f8caa1da7a", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-d50f1c7422", ContainerID:"5a18dc1d72f15e8d252f698eb9ce202a2d708ed1896ceba4b723b66262ab4005", Pod:"coredns-76f75df574-pmk7s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6252da30889", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:30.706734 containerd[1716]: 2024-06-25 18:45:30.680 [INFO][5157] k8s.go 608: Cleaning up netns ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Jun 25 18:45:30.706734 containerd[1716]: 2024-06-25 18:45:30.680 [INFO][5157] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" iface="eth0" netns="" Jun 25 18:45:30.706734 containerd[1716]: 2024-06-25 18:45:30.680 [INFO][5157] k8s.go 615: Releasing IP address(es) ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Jun 25 18:45:30.706734 containerd[1716]: 2024-06-25 18:45:30.680 [INFO][5157] utils.go 188: Calico CNI releasing IP address ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Jun 25 18:45:30.706734 containerd[1716]: 2024-06-25 18:45:30.698 [INFO][5163] ipam_plugin.go 411: Releasing address using handleID ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" HandleID="k8s-pod-network.26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0" Jun 25 18:45:30.706734 containerd[1716]: 2024-06-25 18:45:30.698 [INFO][5163] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:30.706734 containerd[1716]: 2024-06-25 18:45:30.698 [INFO][5163] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:30.706734 containerd[1716]: 2024-06-25 18:45:30.703 [WARNING][5163] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" HandleID="k8s-pod-network.26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0" Jun 25 18:45:30.706734 containerd[1716]: 2024-06-25 18:45:30.703 [INFO][5163] ipam_plugin.go 439: Releasing address using workloadID ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" HandleID="k8s-pod-network.26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Workload="ci--4012.0.0--a--d50f1c7422-k8s-coredns--76f75df574--pmk7s-eth0" Jun 25 18:45:30.706734 containerd[1716]: 2024-06-25 18:45:30.704 [INFO][5163] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:30.706734 containerd[1716]: 2024-06-25 18:45:30.705 [INFO][5157] k8s.go 621: Teardown processing complete. ContainerID="26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b" Jun 25 18:45:30.706734 containerd[1716]: time="2024-06-25T18:45:30.706639075Z" level=info msg="TearDown network for sandbox \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\" successfully" Jun 25 18:45:30.714891 containerd[1716]: time="2024-06-25T18:45:30.714825481Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:45:30.715167 containerd[1716]: time="2024-06-25T18:45:30.714896981Z" level=info msg="RemovePodSandbox \"26ed5e3b4493a5f77287150c08c7fe8e68f0d662d0c889ec8326cde7dad81e9b\" returns successfully" Jun 25 18:45:30.715482 containerd[1716]: time="2024-06-25T18:45:30.715457082Z" level=info msg="StopPodSandbox for \"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\"" Jun 25 18:45:30.777463 containerd[1716]: 2024-06-25 18:45:30.750 [WARNING][5181] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0", GenerateName:"calico-kube-controllers-5d848d5b8b-", Namespace:"calico-system", SelfLink:"", UID:"ac43ad38-8dbb-4f6c-a78e-0a97e02beaac", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d848d5b8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-d50f1c7422", ContainerID:"3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc", Pod:"calico-kube-controllers-5d848d5b8b-xwq6w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali057f7027ce6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:30.777463 containerd[1716]: 2024-06-25 18:45:30.750 [INFO][5181] k8s.go 608: Cleaning up netns ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Jun 25 18:45:30.777463 containerd[1716]: 2024-06-25 18:45:30.750 [INFO][5181] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" iface="eth0" netns="" Jun 25 18:45:30.777463 containerd[1716]: 2024-06-25 18:45:30.750 [INFO][5181] k8s.go 615: Releasing IP address(es) ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Jun 25 18:45:30.777463 containerd[1716]: 2024-06-25 18:45:30.750 [INFO][5181] utils.go 188: Calico CNI releasing IP address ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Jun 25 18:45:30.777463 containerd[1716]: 2024-06-25 18:45:30.768 [INFO][5187] ipam_plugin.go 411: Releasing address using handleID ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" HandleID="k8s-pod-network.bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Workload="ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0" Jun 25 18:45:30.777463 containerd[1716]: 2024-06-25 18:45:30.769 [INFO][5187] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:30.777463 containerd[1716]: 2024-06-25 18:45:30.769 [INFO][5187] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:30.777463 containerd[1716]: 2024-06-25 18:45:30.774 [WARNING][5187] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" HandleID="k8s-pod-network.bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Workload="ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0" Jun 25 18:45:30.777463 containerd[1716]: 2024-06-25 18:45:30.774 [INFO][5187] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" HandleID="k8s-pod-network.bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Workload="ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0" Jun 25 18:45:30.777463 containerd[1716]: 2024-06-25 18:45:30.775 [INFO][5187] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:30.777463 containerd[1716]: 2024-06-25 18:45:30.776 [INFO][5181] k8s.go 621: Teardown processing complete. ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Jun 25 18:45:30.778130 containerd[1716]: time="2024-06-25T18:45:30.777489330Z" level=info msg="TearDown network for sandbox \"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\" successfully" Jun 25 18:45:30.778130 containerd[1716]: time="2024-06-25T18:45:30.777521630Z" level=info msg="StopPodSandbox for \"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\" returns successfully" Jun 25 18:45:30.778130 containerd[1716]: time="2024-06-25T18:45:30.778040531Z" level=info msg="RemovePodSandbox for \"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\"" Jun 25 18:45:30.778130 containerd[1716]: time="2024-06-25T18:45:30.778073631Z" level=info msg="Forcibly stopping sandbox \"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\"" Jun 25 18:45:30.837564 containerd[1716]: 2024-06-25 18:45:30.807 [WARNING][5205] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0", GenerateName:"calico-kube-controllers-5d848d5b8b-", Namespace:"calico-system", SelfLink:"", UID:"ac43ad38-8dbb-4f6c-a78e-0a97e02beaac", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d848d5b8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-d50f1c7422", ContainerID:"3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc", Pod:"calico-kube-controllers-5d848d5b8b-xwq6w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali057f7027ce6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:30.837564 containerd[1716]: 2024-06-25 18:45:30.807 [INFO][5205] k8s.go 608: Cleaning up netns ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Jun 25 18:45:30.837564 containerd[1716]: 2024-06-25 18:45:30.807 [INFO][5205] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" iface="eth0" netns="" Jun 25 18:45:30.837564 containerd[1716]: 2024-06-25 18:45:30.807 [INFO][5205] k8s.go 615: Releasing IP address(es) ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Jun 25 18:45:30.837564 containerd[1716]: 2024-06-25 18:45:30.807 [INFO][5205] utils.go 188: Calico CNI releasing IP address ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Jun 25 18:45:30.837564 containerd[1716]: 2024-06-25 18:45:30.828 [INFO][5211] ipam_plugin.go 411: Releasing address using handleID ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" HandleID="k8s-pod-network.bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Workload="ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0" Jun 25 18:45:30.837564 containerd[1716]: 2024-06-25 18:45:30.828 [INFO][5211] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:30.837564 containerd[1716]: 2024-06-25 18:45:30.828 [INFO][5211] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:30.837564 containerd[1716]: 2024-06-25 18:45:30.833 [WARNING][5211] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" HandleID="k8s-pod-network.bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Workload="ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0" Jun 25 18:45:30.837564 containerd[1716]: 2024-06-25 18:45:30.833 [INFO][5211] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" HandleID="k8s-pod-network.bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Workload="ci--4012.0.0--a--d50f1c7422-k8s-calico--kube--controllers--5d848d5b8b--xwq6w-eth0" Jun 25 18:45:30.837564 containerd[1716]: 2024-06-25 18:45:30.835 [INFO][5211] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:30.837564 containerd[1716]: 2024-06-25 18:45:30.836 [INFO][5205] k8s.go 621: Teardown processing complete. ContainerID="bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008" Jun 25 18:45:30.838204 containerd[1716]: time="2024-06-25T18:45:30.837595778Z" level=info msg="TearDown network for sandbox \"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\" successfully" Jun 25 18:45:30.845692 containerd[1716]: time="2024-06-25T18:45:30.845574584Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:45:30.846131 containerd[1716]: time="2024-06-25T18:45:30.845727784Z" level=info msg="RemovePodSandbox \"bda4dbc33bc269213859e3b26502f1a4d994ce8b492814fef8df0a76d5cbc008\" returns successfully" Jun 25 18:45:30.884455 kubelet[3241]: I0625 18:45:30.884407 3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-f5rm6" podStartSLOduration=48.884341915 podStartE2EDuration="48.884341915s" podCreationTimestamp="2024-06-25 18:44:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:45:30.448873772 +0000 UTC m=+60.476082079" watchObservedRunningTime="2024-06-25 18:45:30.884341915 +0000 UTC m=+60.911550322" Jun 25 18:45:31.160434 systemd-networkd[1359]: caliec811c4f579: Gained IPv6LL Jun 25 18:45:31.169148 kubelet[3241]: I0625 18:45:31.169113 3241 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 18:45:31.169148 kubelet[3241]: I0625 18:45:31.169154 3241 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 18:45:33.072143 containerd[1716]: time="2024-06-25T18:45:33.072092237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:33.074527 containerd[1716]: time="2024-06-25T18:45:33.074459539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 18:45:33.078401 containerd[1716]: time="2024-06-25T18:45:33.078304342Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:33.081693 
containerd[1716]: time="2024-06-25T18:45:33.081641644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:33.082710 containerd[1716]: time="2024-06-25T18:45:33.082262945Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 2.930715007s" Jun 25 18:45:33.082710 containerd[1716]: time="2024-06-25T18:45:33.082322845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 18:45:33.096693 containerd[1716]: time="2024-06-25T18:45:33.096657256Z" level=info msg="CreateContainer within sandbox \"3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 18:45:33.132034 containerd[1716]: time="2024-06-25T18:45:33.131990684Z" level=info msg="CreateContainer within sandbox \"3b7da14a3bf751928abc9184275ca3fffbf7e34cf34fa304d82b761af4ede3bc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d694b84bd02320730c84977e15d25e1f384ac28fe50b6ac3da1d3fc1e77ef31f\"" Jun 25 18:45:33.133493 containerd[1716]: time="2024-06-25T18:45:33.132476085Z" level=info msg="StartContainer for \"d694b84bd02320730c84977e15d25e1f384ac28fe50b6ac3da1d3fc1e77ef31f\"" Jun 25 18:45:33.162732 systemd[1]: Started cri-containerd-d694b84bd02320730c84977e15d25e1f384ac28fe50b6ac3da1d3fc1e77ef31f.scope - libcontainer container d694b84bd02320730c84977e15d25e1f384ac28fe50b6ac3da1d3fc1e77ef31f. Jun 25 18:45:33.207577 containerd[1716]: time="2024-06-25T18:45:33.207426544Z" level=info msg="StartContainer for \"d694b84bd02320730c84977e15d25e1f384ac28fe50b6ac3da1d3fc1e77ef31f\" returns successfully" Jun 25 18:45:33.359057 kubelet[3241]: I0625 18:45:33.358077 3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d848d5b8b-xwq6w" podStartSLOduration=39.816562687 podStartE2EDuration="44.358022362s" podCreationTimestamp="2024-06-25 18:44:49 +0000 UTC" firstStartedPulling="2024-06-25 18:45:28.54106467 +0000 UTC m=+58.568272977" lastFinishedPulling="2024-06-25 18:45:33.082524345 +0000 UTC m=+63.109732652" observedRunningTime="2024-06-25 18:45:33.356060861 +0000 UTC m=+63.383269268" watchObservedRunningTime="2024-06-25 18:45:33.358022362 +0000 UTC m=+63.385230669" Jun 25 18:45:39.279642 systemd[1]: run-containerd-runc-k8s.io-a60dacbcb85b605e440e11f5c3a0be4245363cdb71ccaa9fa1b6f61ea5e06f85-runc.cmuxZb.mount: Deactivated successfully. Jun 25 18:45:44.050943 kubelet[3241]: I0625 18:45:44.049934 3241 topology_manager.go:215] "Topology Admit Handler" podUID="a86d8fac-629d-4b2f-83ea-093bfce5370b" podNamespace="calico-apiserver" podName="calico-apiserver-cd49fc7df-5d7qk" Jun 25 18:45:44.064326 systemd[1]: Created slice kubepods-besteffort-poda86d8fac_629d_4b2f_83ea_093bfce5370b.slice - libcontainer container kubepods-besteffort-poda86d8fac_629d_4b2f_83ea_093bfce5370b.slice. 
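The forced-stop entries earlier in this stretch (StopPodSandbox, "Forcibly stopping sandbox", the CNI_CONTAINERID-mismatch warnings, and "Asked to release address but it doesn't exist. Ignoring") all trace the same idempotent release path: take the host-wide IPAM lock, try to release the allocation by handle ID, fall back to the workload ID, and treat a missing allocation as already done. A minimal, hypothetical Go sketch of that pattern follows; the types and names are invented for illustration and are not Calico's actual code.

package main

import (
	"errors"
	"fmt"
	"sync"
)

var errNotFound = errors.New("address not found")

// ipamStore is a stand-in for per-host IPAM state; the mutex plays the
// role of the "host-wide IPAM lock" seen in the log.
type ipamStore struct {
	mu   sync.Mutex
	byID map[string]string // allocation key -> assigned IP
}

func (s *ipamStore) release(key string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, ok := s.byID[key]; !ok {
		return errNotFound
	}
	delete(s.byID, key)
	return nil
}

// releaseEndpoint mirrors the logged flow: handle ID first, then workload ID,
// and "not found" is only a warning so a repeated teardown still succeeds.
func releaseEndpoint(s *ipamStore, handleID, workloadID string) {
	if err := s.release(handleID); errors.Is(err, errNotFound) {
		fmt.Println("WARNING: asked to release address but it doesn't exist; ignoring")
	}
	// Fall back to the workload ID; "not found" here just means it was
	// already released on an earlier, non-forced teardown.
	if err := s.release(workloadID); err != nil && !errors.Is(err, errNotFound) {
		fmt.Println("unexpected release error:", err)
	}
	fmt.Println("Teardown processing complete.")
}

func main() {
	s := &ipamStore{byID: map[string]string{}}
	// Second (forced) teardown of a sandbox whose address is already gone.
	releaseEndpoint(s, "k8s-pod-network.example-handle", "example-workload")
}

This is why the force-stop pass can log warnings yet still report that RemovePodSandbox returns successfully.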
Jun 25 18:45:44.077174 kubelet[3241]: I0625 18:45:44.076726 3241 topology_manager.go:215] "Topology Admit Handler" podUID="d4372920-7239-4477-97bf-f1f8e2341b7a" podNamespace="calico-apiserver" podName="calico-apiserver-cd49fc7df-768qp" Jun 25 18:45:44.086077 systemd[1]: Created slice kubepods-besteffort-podd4372920_7239_4477_97bf_f1f8e2341b7a.slice - libcontainer container kubepods-besteffort-podd4372920_7239_4477_97bf_f1f8e2341b7a.slice. Jun 25 18:45:44.243808 kubelet[3241]: I0625 18:45:44.243737 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a86d8fac-629d-4b2f-83ea-093bfce5370b-calico-apiserver-certs\") pod \"calico-apiserver-cd49fc7df-5d7qk\" (UID: \"a86d8fac-629d-4b2f-83ea-093bfce5370b\") " pod="calico-apiserver/calico-apiserver-cd49fc7df-5d7qk" Jun 25 18:45:44.244129 kubelet[3241]: I0625 18:45:44.244007 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d4372920-7239-4477-97bf-f1f8e2341b7a-calico-apiserver-certs\") pod \"calico-apiserver-cd49fc7df-768qp\" (UID: \"d4372920-7239-4477-97bf-f1f8e2341b7a\") " pod="calico-apiserver/calico-apiserver-cd49fc7df-768qp" Jun 25 18:45:44.244129 kubelet[3241]: I0625 18:45:44.244103 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns8kf\" (UniqueName: \"kubernetes.io/projected/d4372920-7239-4477-97bf-f1f8e2341b7a-kube-api-access-ns8kf\") pod \"calico-apiserver-cd49fc7df-768qp\" (UID: \"d4372920-7239-4477-97bf-f1f8e2341b7a\") " pod="calico-apiserver/calico-apiserver-cd49fc7df-768qp" Jun 25 18:45:44.244315 kubelet[3241]: I0625 18:45:44.244171 3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj4jz\" (UniqueName: \"kubernetes.io/projected/a86d8fac-629d-4b2f-83ea-093bfce5370b-kube-api-access-dj4jz\") pod \"calico-apiserver-cd49fc7df-5d7qk\" (UID: \"a86d8fac-629d-4b2f-83ea-093bfce5370b\") " pod="calico-apiserver/calico-apiserver-cd49fc7df-5d7qk" Jun 25 18:45:44.345675 kubelet[3241]: E0625 18:45:44.345141 3241 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 18:45:44.345675 kubelet[3241]: E0625 18:45:44.345238 3241 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a86d8fac-629d-4b2f-83ea-093bfce5370b-calico-apiserver-certs podName:a86d8fac-629d-4b2f-83ea-093bfce5370b nodeName:}" failed. No retries permitted until 2024-06-25 18:45:44.845207264 +0000 UTC m=+74.872415571 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/a86d8fac-629d-4b2f-83ea-093bfce5370b-calico-apiserver-certs") pod "calico-apiserver-cd49fc7df-5d7qk" (UID: "a86d8fac-629d-4b2f-83ea-093bfce5370b") : secret "calico-apiserver-certs" not found Jun 25 18:45:44.345675 kubelet[3241]: E0625 18:45:44.345442 3241 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 18:45:44.345675 kubelet[3241]: E0625 18:45:44.345508 3241 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4372920-7239-4477-97bf-f1f8e2341b7a-calico-apiserver-certs podName:d4372920-7239-4477-97bf-f1f8e2341b7a nodeName:}" failed. 
No retries permitted until 2024-06-25 18:45:44.845492464 +0000 UTC m=+74.872700771 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/d4372920-7239-4477-97bf-f1f8e2341b7a-calico-apiserver-certs") pod "calico-apiserver-cd49fc7df-768qp" (UID: "d4372920-7239-4477-97bf-f1f8e2341b7a") : secret "calico-apiserver-certs" not found Jun 25 18:45:44.975238 containerd[1716]: time="2024-06-25T18:45:44.975155631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cd49fc7df-5d7qk,Uid:a86d8fac-629d-4b2f-83ea-093bfce5370b,Namespace:calico-apiserver,Attempt:0,}" Jun 25 18:45:44.990941 containerd[1716]: time="2024-06-25T18:45:44.990887835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cd49fc7df-768qp,Uid:d4372920-7239-4477-97bf-f1f8e2341b7a,Namespace:calico-apiserver,Attempt:0,}" Jun 25 18:45:45.161763 systemd-networkd[1359]: cali9ddc22ac9a4: Link UP Jun 25 18:45:45.164302 systemd-networkd[1359]: cali9ddc22ac9a4: Gained carrier Jun 25 18:45:45.185199 containerd[1716]: 2024-06-25 18:45:45.071 [INFO][5371] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--5d7qk-eth0 calico-apiserver-cd49fc7df- calico-apiserver a86d8fac-629d-4b2f-83ea-093bfce5370b 884 0 2024-06-25 18:45:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:cd49fc7df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012.0.0-a-d50f1c7422 calico-apiserver-cd49fc7df-5d7qk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9ddc22ac9a4 [] []}} ContainerID="9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222" Namespace="calico-apiserver" Pod="calico-apiserver-cd49fc7df-5d7qk" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--5d7qk-" Jun 25 18:45:45.185199 containerd[1716]: 2024-06-25 18:45:45.071 [INFO][5371] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222" Namespace="calico-apiserver" Pod="calico-apiserver-cd49fc7df-5d7qk" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--5d7qk-eth0" Jun 25 18:45:45.185199 containerd[1716]: 2024-06-25 18:45:45.110 [INFO][5390] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222" HandleID="k8s-pod-network.9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222" Workload="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--5d7qk-eth0" Jun 25 18:45:45.185199 containerd[1716]: 2024-06-25 18:45:45.122 [INFO][5390] ipam_plugin.go 264: Auto assigning IP ContainerID="9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222" HandleID="k8s-pod-network.9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222" Workload="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--5d7qk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000293970), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012.0.0-a-d50f1c7422", "pod":"calico-apiserver-cd49fc7df-5d7qk", "timestamp":"2024-06-25 18:45:45.110820166 +0000 UTC"}, Hostname:"ci-4012.0.0-a-d50f1c7422", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:45.185199 containerd[1716]: 2024-06-25 18:45:45.123 [INFO][5390] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:45.185199 containerd[1716]: 2024-06-25 18:45:45.123 [INFO][5390] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:45.185199 containerd[1716]: 2024-06-25 18:45:45.123 [INFO][5390] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-d50f1c7422' Jun 25 18:45:45.185199 containerd[1716]: 2024-06-25 18:45:45.125 [INFO][5390] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:45.185199 containerd[1716]: 2024-06-25 18:45:45.130 [INFO][5390] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:45.185199 containerd[1716]: 2024-06-25 18:45:45.134 [INFO][5390] ipam.go 489: Trying affinity for 192.168.40.0/26 host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:45.185199 containerd[1716]: 2024-06-25 18:45:45.136 [INFO][5390] ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:45.185199 containerd[1716]: 2024-06-25 18:45:45.139 [INFO][5390] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:45.185199 containerd[1716]: 2024-06-25 18:45:45.140 [INFO][5390] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:45.185199 containerd[1716]: 2024-06-25 18:45:45.141 [INFO][5390] ipam.go 1685: Creating new handle: k8s-pod-network.9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222 Jun 25 18:45:45.185199 containerd[1716]: 2024-06-25 18:45:45.145 [INFO][5390] ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:45.185199 containerd[1716]: 2024-06-25 18:45:45.153 [INFO][5390] ipam.go 1216: Successfully claimed IPs: [192.168.40.5/26] block=192.168.40.0/26 handle="k8s-pod-network.9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:45.185199 containerd[1716]: 2024-06-25 18:45:45.153 [INFO][5390] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.5/26] handle="k8s-pod-network.9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:45.185199 containerd[1716]: 2024-06-25 18:45:45.153 [INFO][5390] ipam_plugin.go 373: Released host-wide IPAM lock. 
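Just above, ipam.go traces a single assignment for this endpoint: confirm the node's affinity for block 192.168.40.0/26, load the block, claim the lowest free address (192.168.40.5, since lower addresses in the block are already held by the coredns and kube-controllers endpoints seen earlier), and write the block back, all while holding the host-wide IPAM lock. A rough Go sketch of that next-free-address step, using a hypothetical block type rather than Calico's real data model:

package main

import (
	"errors"
	"fmt"
	"net/netip"
	"sync"
)

// block is a hypothetical stand-in for one /26 IPAM block (64 addresses);
// it is not Calico's real allocation structure.
type block struct {
	cidr      netip.Prefix
	allocated map[netip.Addr]string // address -> handle ID
}

var errBlockFull = errors.New("no free addresses in block")

// assignNext claims the lowest free address in the block for handleID,
// skipping the network address itself.
func (b *block) assignNext(handleID string) (netip.Addr, error) {
	for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
		if _, used := b.allocated[a]; !used {
			b.allocated[a] = handleID
			return a, nil
		}
	}
	return netip.Addr{}, errBlockFull
}

func main() {
	var hostLock sync.Mutex // plays the role of the host-wide IPAM lock
	b := &block{
		cidr: netip.MustParsePrefix("192.168.40.0/26"),
		allocated: map[netip.Addr]string{
			// Placeholders for endpoints already on this node; .2 and .3
			// match the kube-controllers and coredns endpoints in the log.
			netip.MustParseAddr("192.168.40.1"): "existing",
			netip.MustParseAddr("192.168.40.2"): "calico-kube-controllers",
			netip.MustParseAddr("192.168.40.3"): "coredns-pmk7s",
			netip.MustParseAddr("192.168.40.4"): "existing",
		},
	}

	hostLock.Lock()
	ip, err := b.assignNext("k8s-pod-network.example-handle")
	hostLock.Unlock()
	if err != nil {
		panic(err)
	}
	fmt.Println("assigned", ip) // assigned 192.168.40.5
}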
Jun 25 18:45:45.185199 containerd[1716]: 2024-06-25 18:45:45.153 [INFO][5390] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.40.5/26] IPv6=[] ContainerID="9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222" HandleID="k8s-pod-network.9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222" Workload="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--5d7qk-eth0" Jun 25 18:45:45.188403 containerd[1716]: 2024-06-25 18:45:45.156 [INFO][5371] k8s.go 386: Populated endpoint ContainerID="9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222" Namespace="calico-apiserver" Pod="calico-apiserver-cd49fc7df-5d7qk" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--5d7qk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--5d7qk-eth0", GenerateName:"calico-apiserver-cd49fc7df-", Namespace:"calico-apiserver", SelfLink:"", UID:"a86d8fac-629d-4b2f-83ea-093bfce5370b", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cd49fc7df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-d50f1c7422", ContainerID:"", Pod:"calico-apiserver-cd49fc7df-5d7qk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ddc22ac9a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:45.188403 containerd[1716]: 2024-06-25 18:45:45.157 [INFO][5371] k8s.go 387: Calico CNI using IPs: [192.168.40.5/32] ContainerID="9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222" Namespace="calico-apiserver" Pod="calico-apiserver-cd49fc7df-5d7qk" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--5d7qk-eth0" Jun 25 18:45:45.188403 containerd[1716]: 2024-06-25 18:45:45.157 [INFO][5371] dataplane_linux.go 68: Setting the host side veth name to cali9ddc22ac9a4 ContainerID="9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222" Namespace="calico-apiserver" Pod="calico-apiserver-cd49fc7df-5d7qk" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--5d7qk-eth0" Jun 25 18:45:45.188403 containerd[1716]: 2024-06-25 18:45:45.163 [INFO][5371] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222" Namespace="calico-apiserver" Pod="calico-apiserver-cd49fc7df-5d7qk" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--5d7qk-eth0" Jun 25 18:45:45.188403 containerd[1716]: 2024-06-25 18:45:45.166 [INFO][5371] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222" Namespace="calico-apiserver" Pod="calico-apiserver-cd49fc7df-5d7qk" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--5d7qk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--5d7qk-eth0", GenerateName:"calico-apiserver-cd49fc7df-", Namespace:"calico-apiserver", SelfLink:"", UID:"a86d8fac-629d-4b2f-83ea-093bfce5370b", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cd49fc7df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-d50f1c7422", ContainerID:"9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222", Pod:"calico-apiserver-cd49fc7df-5d7qk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ddc22ac9a4", MAC:"a2:aa:53:e7:db:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:45.188403 containerd[1716]: 2024-06-25 18:45:45.182 [INFO][5371] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222" Namespace="calico-apiserver" Pod="calico-apiserver-cd49fc7df-5d7qk" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--5d7qk-eth0" Jun 25 18:45:45.218910 systemd-networkd[1359]: cali95dd4e7b349: Link UP Jun 25 18:45:45.219185 systemd-networkd[1359]: cali95dd4e7b349: Gained carrier Jun 25 18:45:45.233628 containerd[1716]: 2024-06-25 18:45:45.095 [INFO][5380] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--768qp-eth0 calico-apiserver-cd49fc7df- calico-apiserver d4372920-7239-4477-97bf-f1f8e2341b7a 890 0 2024-06-25 18:45:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:cd49fc7df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012.0.0-a-d50f1c7422 calico-apiserver-cd49fc7df-768qp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali95dd4e7b349 [] []}} ContainerID="faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927" Namespace="calico-apiserver" Pod="calico-apiserver-cd49fc7df-768qp" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--768qp-" Jun 25 18:45:45.233628 containerd[1716]: 2024-06-25 18:45:45.096 [INFO][5380] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927" 
Namespace="calico-apiserver" Pod="calico-apiserver-cd49fc7df-768qp" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--768qp-eth0" Jun 25 18:45:45.233628 containerd[1716]: 2024-06-25 18:45:45.140 [INFO][5399] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927" HandleID="k8s-pod-network.faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927" Workload="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--768qp-eth0" Jun 25 18:45:45.233628 containerd[1716]: 2024-06-25 18:45:45.152 [INFO][5399] ipam_plugin.go 264: Auto assigning IP ContainerID="faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927" HandleID="k8s-pod-network.faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927" Workload="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--768qp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00059ba50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012.0.0-a-d50f1c7422", "pod":"calico-apiserver-cd49fc7df-768qp", "timestamp":"2024-06-25 18:45:45.140070074 +0000 UTC"}, Hostname:"ci-4012.0.0-a-d50f1c7422", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:45.233628 containerd[1716]: 2024-06-25 18:45:45.152 [INFO][5399] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:45.233628 containerd[1716]: 2024-06-25 18:45:45.155 [INFO][5399] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:45.233628 containerd[1716]: 2024-06-25 18:45:45.155 [INFO][5399] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-d50f1c7422' Jun 25 18:45:45.233628 containerd[1716]: 2024-06-25 18:45:45.157 [INFO][5399] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:45.233628 containerd[1716]: 2024-06-25 18:45:45.164 [INFO][5399] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:45.233628 containerd[1716]: 2024-06-25 18:45:45.173 [INFO][5399] ipam.go 489: Trying affinity for 192.168.40.0/26 host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:45.233628 containerd[1716]: 2024-06-25 18:45:45.177 [INFO][5399] ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:45.233628 containerd[1716]: 2024-06-25 18:45:45.189 [INFO][5399] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:45.233628 containerd[1716]: 2024-06-25 18:45:45.189 [INFO][5399] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:45.233628 containerd[1716]: 2024-06-25 18:45:45.196 [INFO][5399] ipam.go 1685: Creating new handle: k8s-pod-network.faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927 Jun 25 18:45:45.233628 containerd[1716]: 2024-06-25 18:45:45.201 [INFO][5399] ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927" host="ci-4012.0.0-a-d50f1c7422" Jun 25 
18:45:45.233628 containerd[1716]: 2024-06-25 18:45:45.208 [INFO][5399] ipam.go 1216: Successfully claimed IPs: [192.168.40.6/26] block=192.168.40.0/26 handle="k8s-pod-network.faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:45.233628 containerd[1716]: 2024-06-25 18:45:45.208 [INFO][5399] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.6/26] handle="k8s-pod-network.faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927" host="ci-4012.0.0-a-d50f1c7422" Jun 25 18:45:45.233628 containerd[1716]: 2024-06-25 18:45:45.208 [INFO][5399] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:45.233628 containerd[1716]: 2024-06-25 18:45:45.209 [INFO][5399] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.40.6/26] IPv6=[] ContainerID="faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927" HandleID="k8s-pod-network.faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927" Workload="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--768qp-eth0" Jun 25 18:45:45.234609 containerd[1716]: 2024-06-25 18:45:45.210 [INFO][5380] k8s.go 386: Populated endpoint ContainerID="faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927" Namespace="calico-apiserver" Pod="calico-apiserver-cd49fc7df-768qp" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--768qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--768qp-eth0", GenerateName:"calico-apiserver-cd49fc7df-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4372920-7239-4477-97bf-f1f8e2341b7a", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cd49fc7df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-d50f1c7422", ContainerID:"", Pod:"calico-apiserver-cd49fc7df-768qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95dd4e7b349", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:45.234609 containerd[1716]: 2024-06-25 18:45:45.211 [INFO][5380] k8s.go 387: Calico CNI using IPs: [192.168.40.6/32] ContainerID="faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927" Namespace="calico-apiserver" Pod="calico-apiserver-cd49fc7df-768qp" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--768qp-eth0" Jun 25 18:45:45.234609 containerd[1716]: 2024-06-25 18:45:45.211 [INFO][5380] dataplane_linux.go 68: Setting the host side veth name to cali95dd4e7b349 ContainerID="faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927" Namespace="calico-apiserver" 
Pod="calico-apiserver-cd49fc7df-768qp" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--768qp-eth0" Jun 25 18:45:45.234609 containerd[1716]: 2024-06-25 18:45:45.218 [INFO][5380] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927" Namespace="calico-apiserver" Pod="calico-apiserver-cd49fc7df-768qp" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--768qp-eth0" Jun 25 18:45:45.234609 containerd[1716]: 2024-06-25 18:45:45.219 [INFO][5380] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927" Namespace="calico-apiserver" Pod="calico-apiserver-cd49fc7df-768qp" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--768qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--768qp-eth0", GenerateName:"calico-apiserver-cd49fc7df-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4372920-7239-4477-97bf-f1f8e2341b7a", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cd49fc7df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-d50f1c7422", ContainerID:"faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927", Pod:"calico-apiserver-cd49fc7df-768qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95dd4e7b349", MAC:"a6:3e:70:b6:e7:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:45.234609 containerd[1716]: 2024-06-25 18:45:45.229 [INFO][5380] k8s.go 500: Wrote updated endpoint to datastore ContainerID="faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927" Namespace="calico-apiserver" Pod="calico-apiserver-cd49fc7df-768qp" WorkloadEndpoint="ci--4012.0.0--a--d50f1c7422-k8s-calico--apiserver--cd49fc7df--768qp-eth0" Jun 25 18:45:45.326885 containerd[1716]: time="2024-06-25T18:45:45.326321323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:45.326885 containerd[1716]: time="2024-06-25T18:45:45.326434923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:45.326885 containerd[1716]: time="2024-06-25T18:45:45.326526123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:45.326885 containerd[1716]: time="2024-06-25T18:45:45.326563523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:45.332033 containerd[1716]: time="2024-06-25T18:45:45.331788225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:45.332033 containerd[1716]: time="2024-06-25T18:45:45.331879925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:45.333167 containerd[1716]: time="2024-06-25T18:45:45.333100725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:45.333386 containerd[1716]: time="2024-06-25T18:45:45.333331425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:45.353662 systemd[1]: Started cri-containerd-9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222.scope - libcontainer container 9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222. Jun 25 18:45:45.358234 systemd[1]: Started cri-containerd-faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927.scope - libcontainer container faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927. Jun 25 18:45:45.448482 containerd[1716]: time="2024-06-25T18:45:45.448392855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cd49fc7df-768qp,Uid:d4372920-7239-4477-97bf-f1f8e2341b7a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927\"" Jun 25 18:45:45.453641 containerd[1716]: time="2024-06-25T18:45:45.453510457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 18:45:45.454744 containerd[1716]: time="2024-06-25T18:45:45.454711557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cd49fc7df-5d7qk,Uid:a86d8fac-629d-4b2f-83ea-093bfce5370b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222\"" Jun 25 18:45:46.328567 systemd-networkd[1359]: cali95dd4e7b349: Gained IPv6LL Jun 25 18:45:47.096627 systemd-networkd[1359]: cali9ddc22ac9a4: Gained IPv6LL Jun 25 18:45:49.024766 containerd[1716]: time="2024-06-25T18:45:49.024718199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:49.026733 containerd[1716]: time="2024-06-25T18:45:49.026692700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 18:45:49.029879 containerd[1716]: time="2024-06-25T18:45:49.029784500Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:49.034014 containerd[1716]: time="2024-06-25T18:45:49.033961001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:49.034808 containerd[1716]: 
time="2024-06-25T18:45:49.034684302Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 3.581133245s" Jun 25 18:45:49.034808 containerd[1716]: time="2024-06-25T18:45:49.034722102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 18:45:49.036171 containerd[1716]: time="2024-06-25T18:45:49.035963202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 18:45:49.037289 containerd[1716]: time="2024-06-25T18:45:49.037211102Z" level=info msg="CreateContainer within sandbox \"faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 18:45:49.081481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3898906923.mount: Deactivated successfully. Jun 25 18:45:49.089136 containerd[1716]: time="2024-06-25T18:45:49.089092916Z" level=info msg="CreateContainer within sandbox \"faccb8e116f56c90670ed77ea206bd02b21539af2c495291b5c2a9ab2e01d927\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"de6a4d903b7d1747b84f31c271ca521bf2c03ee55200a89034876885fdd6da59\"" Jun 25 18:45:49.089871 containerd[1716]: time="2024-06-25T18:45:49.089838816Z" level=info msg="StartContainer for \"de6a4d903b7d1747b84f31c271ca521bf2c03ee55200a89034876885fdd6da59\"" Jun 25 18:45:49.136647 systemd[1]: Started cri-containerd-de6a4d903b7d1747b84f31c271ca521bf2c03ee55200a89034876885fdd6da59.scope - libcontainer container de6a4d903b7d1747b84f31c271ca521bf2c03ee55200a89034876885fdd6da59. 
Jun 25 18:45:49.180827 containerd[1716]: time="2024-06-25T18:45:49.180761065Z" level=info msg="StartContainer for \"de6a4d903b7d1747b84f31c271ca521bf2c03ee55200a89034876885fdd6da59\" returns successfully" Jun 25 18:45:49.389849 kubelet[3241]: I0625 18:45:49.389608 3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-cd49fc7df-768qp" podStartSLOduration=1.806205495 podStartE2EDuration="5.389561141s" podCreationTimestamp="2024-06-25 18:45:44 +0000 UTC" firstStartedPulling="2024-06-25 18:45:45.451886256 +0000 UTC m=+75.479094563" lastFinishedPulling="2024-06-25 18:45:49.035241902 +0000 UTC m=+79.062450209" observedRunningTime="2024-06-25 18:45:49.387332139 +0000 UTC m=+79.414540446" watchObservedRunningTime="2024-06-25 18:45:49.389561141 +0000 UTC m=+79.416769448" Jun 25 18:45:49.513727 containerd[1716]: time="2024-06-25T18:45:49.512845845Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:49.515408 containerd[1716]: time="2024-06-25T18:45:49.515360248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Jun 25 18:45:49.518515 containerd[1716]: time="2024-06-25T18:45:49.518331950Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 482.333648ms" Jun 25 18:45:49.518515 containerd[1716]: time="2024-06-25T18:45:49.518385850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 18:45:49.522114 containerd[1716]: time="2024-06-25T18:45:49.522084753Z" level=info msg="CreateContainer within sandbox \"9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 18:45:49.562115 containerd[1716]: time="2024-06-25T18:45:49.562021787Z" level=info msg="CreateContainer within sandbox \"9940998183993f63102ba6430b448f70462292fb1d6ce72cd628cca0068cd222\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e877ac853a7b7aed2708871a3ea170c9a164251fb4d1173c86df7258420978f4\"" Jun 25 18:45:49.565164 containerd[1716]: time="2024-06-25T18:45:49.563939789Z" level=info msg="StartContainer for \"e877ac853a7b7aed2708871a3ea170c9a164251fb4d1173c86df7258420978f4\"" Jun 25 18:45:49.595453 systemd[1]: Started cri-containerd-e877ac853a7b7aed2708871a3ea170c9a164251fb4d1173c86df7258420978f4.scope - libcontainer container e877ac853a7b7aed2708871a3ea170c9a164251fb4d1173c86df7258420978f4. 
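Both calico-apiserver containers reference the same image, which is why the first PullImage above completes in about 3.58s while the second returns in roughly 482ms: the content is already in containerd's store, so only the reference is re-resolved. For illustration, this is roughly how the same pull looks against the containerd 1.x Go client directly; the socket path and the k8s.io namespace are assumptions based on a typical kubelet setup, and the CRI path in the log does not run this exact code.

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumed defaults: the standard containerd socket and the "k8s.io"
	// namespace used by the kubelet/CRI integration.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// A second pull of an already-present reference returns almost
	// immediately, matching the ~3.58s vs ~482ms timings in the log.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.28.0", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
}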
Jun 25 18:45:49.661120 containerd[1716]: time="2024-06-25T18:45:49.661003771Z" level=info msg="StartContainer for \"e877ac853a7b7aed2708871a3ea170c9a164251fb4d1173c86df7258420978f4\" returns successfully" Jun 25 18:45:50.414398 kubelet[3241]: I0625 18:45:50.414009 3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-cd49fc7df-5d7qk" podStartSLOduration=2.351261064 podStartE2EDuration="6.413962657s" podCreationTimestamp="2024-06-25 18:45:44 +0000 UTC" firstStartedPulling="2024-06-25 18:45:45.455918457 +0000 UTC m=+75.483126764" lastFinishedPulling="2024-06-25 18:45:49.51862005 +0000 UTC m=+79.545828357" observedRunningTime="2024-06-25 18:45:50.400579965 +0000 UTC m=+80.427788272" watchObservedRunningTime="2024-06-25 18:45:50.413962657 +0000 UTC m=+80.441170964" Jun 25 18:46:10.893382 systemd[1]: run-containerd-runc-k8s.io-d694b84bd02320730c84977e15d25e1f384ac28fe50b6ac3da1d3fc1e77ef31f-runc.6pepxA.mount: Deactivated successfully. Jun 25 18:46:39.208221 systemd[1]: run-containerd-runc-k8s.io-a60dacbcb85b605e440e11f5c3a0be4245363cdb71ccaa9fa1b6f61ea5e06f85-runc.Ca8bdy.mount: Deactivated successfully. Jun 25 18:46:40.892976 systemd[1]: run-containerd-runc-k8s.io-d694b84bd02320730c84977e15d25e1f384ac28fe50b6ac3da1d3fc1e77ef31f-runc.G2MHwb.mount: Deactivated successfully. Jun 25 18:47:00.688569 systemd[1]: Started sshd@7-10.200.8.40:22-10.200.16.10:50376.service - OpenSSH per-connection server daemon (10.200.16.10:50376). Jun 25 18:47:01.343204 sshd[5791]: Accepted publickey for core from 10.200.16.10 port 50376 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:01.344840 sshd[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:01.349369 systemd-logind[1697]: New session 10 of user core. Jun 25 18:47:01.356423 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 18:47:01.903575 sshd[5791]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:01.907917 systemd[1]: sshd@7-10.200.8.40:22-10.200.16.10:50376.service: Deactivated successfully. Jun 25 18:47:01.910119 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 18:47:01.910953 systemd-logind[1697]: Session 10 logged out. Waiting for processes to exit. Jun 25 18:47:01.912016 systemd-logind[1697]: Removed session 10. Jun 25 18:47:07.045695 systemd[1]: Started sshd@8-10.200.8.40:22-10.200.16.10:59172.service - OpenSSH per-connection server daemon (10.200.16.10:59172). Jun 25 18:47:07.693082 sshd[5811]: Accepted publickey for core from 10.200.16.10 port 59172 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:07.694668 sshd[5811]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:07.699421 systemd-logind[1697]: New session 11 of user core. Jun 25 18:47:07.704441 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 18:47:08.217231 sshd[5811]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:08.220847 systemd[1]: sshd@8-10.200.8.40:22-10.200.16.10:59172.service: Deactivated successfully. Jun 25 18:47:08.223572 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 18:47:08.225689 systemd-logind[1697]: Session 11 logged out. Waiting for processes to exit. Jun 25 18:47:08.226887 systemd-logind[1697]: Removed session 11. Jun 25 18:47:13.337515 systemd[1]: Started sshd@9-10.200.8.40:22-10.200.16.10:59186.service - OpenSSH per-connection server daemon (10.200.16.10:59186). 
Jun 25 18:47:13.994450 sshd[5870]: Accepted publickey for core from 10.200.16.10 port 59186 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:13.996297 sshd[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:14.001882 systemd-logind[1697]: New session 12 of user core. Jun 25 18:47:14.008446 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 18:47:14.513387 sshd[5870]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:14.517066 systemd[1]: sshd@9-10.200.8.40:22-10.200.16.10:59186.service: Deactivated successfully. Jun 25 18:47:14.519846 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 18:47:14.522048 systemd-logind[1697]: Session 12 logged out. Waiting for processes to exit. Jun 25 18:47:14.523073 systemd-logind[1697]: Removed session 12. Jun 25 18:47:15.638662 systemd[1]: run-containerd-runc-k8s.io-d694b84bd02320730c84977e15d25e1f384ac28fe50b6ac3da1d3fc1e77ef31f-runc.2HUcr7.mount: Deactivated successfully. Jun 25 18:47:19.628613 systemd[1]: Started sshd@10-10.200.8.40:22-10.200.16.10:44816.service - OpenSSH per-connection server daemon (10.200.16.10:44816). Jun 25 18:47:20.301891 sshd[5904]: Accepted publickey for core from 10.200.16.10 port 44816 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:20.303481 sshd[5904]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:20.308070 systemd-logind[1697]: New session 13 of user core. Jun 25 18:47:20.314427 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 18:47:20.815098 sshd[5904]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:20.819824 systemd[1]: sshd@10-10.200.8.40:22-10.200.16.10:44816.service: Deactivated successfully. Jun 25 18:47:20.822223 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:47:20.823053 systemd-logind[1697]: Session 13 logged out. Waiting for processes to exit. Jun 25 18:47:20.823974 systemd-logind[1697]: Removed session 13. Jun 25 18:47:20.926372 systemd[1]: Started sshd@11-10.200.8.40:22-10.200.16.10:44828.service - OpenSSH per-connection server daemon (10.200.16.10:44828). Jun 25 18:47:21.602852 sshd[5918]: Accepted publickey for core from 10.200.16.10 port 44828 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:21.604347 sshd[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:21.608983 systemd-logind[1697]: New session 14 of user core. Jun 25 18:47:21.614427 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 18:47:22.142977 sshd[5918]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:22.146041 systemd[1]: sshd@11-10.200.8.40:22-10.200.16.10:44828.service: Deactivated successfully. Jun 25 18:47:22.148353 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 18:47:22.149884 systemd-logind[1697]: Session 14 logged out. Waiting for processes to exit. Jun 25 18:47:22.151219 systemd-logind[1697]: Removed session 14. Jun 25 18:47:22.260583 systemd[1]: Started sshd@12-10.200.8.40:22-10.200.16.10:44836.service - OpenSSH per-connection server daemon (10.200.16.10:44836). 
Jun 25 18:47:22.904625 sshd[5928]: Accepted publickey for core from 10.200.16.10 port 44836 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:22.906258 sshd[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:22.913076 systemd-logind[1697]: New session 15 of user core. Jun 25 18:47:22.918755 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 18:47:23.421000 sshd[5928]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:23.423967 systemd[1]: sshd@12-10.200.8.40:22-10.200.16.10:44836.service: Deactivated successfully. Jun 25 18:47:23.426232 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 18:47:23.428237 systemd-logind[1697]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:47:23.429783 systemd-logind[1697]: Removed session 15. Jun 25 18:47:28.535668 systemd[1]: Started sshd@13-10.200.8.40:22-10.200.16.10:58300.service - OpenSSH per-connection server daemon (10.200.16.10:58300). Jun 25 18:47:29.181772 sshd[5946]: Accepted publickey for core from 10.200.16.10 port 58300 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:29.183300 sshd[5946]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:29.187391 systemd-logind[1697]: New session 16 of user core. Jun 25 18:47:29.190525 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 18:47:29.716288 sshd[5946]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:29.719430 systemd[1]: sshd@13-10.200.8.40:22-10.200.16.10:58300.service: Deactivated successfully. Jun 25 18:47:29.721764 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:47:29.723506 systemd-logind[1697]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:47:29.724919 systemd-logind[1697]: Removed session 16. Jun 25 18:47:34.833545 systemd[1]: Started sshd@14-10.200.8.40:22-10.200.16.10:45174.service - OpenSSH per-connection server daemon (10.200.16.10:45174). Jun 25 18:47:35.515688 sshd[5969]: Accepted publickey for core from 10.200.16.10 port 45174 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:35.517296 sshd[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:35.521390 systemd-logind[1697]: New session 17 of user core. Jun 25 18:47:35.526424 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 18:47:36.027471 sshd[5969]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:36.031515 systemd[1]: sshd@14-10.200.8.40:22-10.200.16.10:45174.service: Deactivated successfully. Jun 25 18:47:36.033588 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 18:47:36.034429 systemd-logind[1697]: Session 17 logged out. Waiting for processes to exit. Jun 25 18:47:36.035486 systemd-logind[1697]: Removed session 17. Jun 25 18:47:40.895089 systemd[1]: run-containerd-runc-k8s.io-d694b84bd02320730c84977e15d25e1f384ac28fe50b6ac3da1d3fc1e77ef31f-runc.txLqEi.mount: Deactivated successfully. Jun 25 18:47:41.149623 systemd[1]: Started sshd@15-10.200.8.40:22-10.200.16.10:45186.service - OpenSSH per-connection server daemon (10.200.16.10:45186). 
Jun 25 18:47:41.789608 sshd[6023]: Accepted publickey for core from 10.200.16.10 port 45186 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:41.791159 sshd[6023]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:41.796349 systemd-logind[1697]: New session 18 of user core. Jun 25 18:47:41.805426 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 18:47:42.300776 sshd[6023]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:42.305480 systemd[1]: sshd@15-10.200.8.40:22-10.200.16.10:45186.service: Deactivated successfully. Jun 25 18:47:42.308147 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 18:47:42.309134 systemd-logind[1697]: Session 18 logged out. Waiting for processes to exit. Jun 25 18:47:42.310285 systemd-logind[1697]: Removed session 18. Jun 25 18:47:47.419566 systemd[1]: Started sshd@16-10.200.8.40:22-10.200.16.10:41682.service - OpenSSH per-connection server daemon (10.200.16.10:41682). Jun 25 18:47:48.059120 sshd[6043]: Accepted publickey for core from 10.200.16.10 port 41682 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:48.060718 sshd[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:48.068029 systemd-logind[1697]: New session 19 of user core. Jun 25 18:47:48.072466 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 18:47:48.572667 sshd[6043]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:48.577236 systemd[1]: sshd@16-10.200.8.40:22-10.200.16.10:41682.service: Deactivated successfully. Jun 25 18:47:48.579472 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 18:47:48.580229 systemd-logind[1697]: Session 19 logged out. Waiting for processes to exit. Jun 25 18:47:48.581317 systemd-logind[1697]: Removed session 19. Jun 25 18:47:48.690568 systemd[1]: Started sshd@17-10.200.8.40:22-10.200.16.10:41688.service - OpenSSH per-connection server daemon (10.200.16.10:41688). Jun 25 18:47:49.328891 sshd[6056]: Accepted publickey for core from 10.200.16.10 port 41688 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:49.330698 sshd[6056]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:49.335549 systemd-logind[1697]: New session 20 of user core. Jun 25 18:47:49.339599 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 18:47:49.928867 sshd[6056]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:49.932959 systemd[1]: sshd@17-10.200.8.40:22-10.200.16.10:41688.service: Deactivated successfully. Jun 25 18:47:49.935090 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 18:47:49.936137 systemd-logind[1697]: Session 20 logged out. Waiting for processes to exit. Jun 25 18:47:49.937130 systemd-logind[1697]: Removed session 20. Jun 25 18:47:50.046595 systemd[1]: Started sshd@18-10.200.8.40:22-10.200.16.10:41700.service - OpenSSH per-connection server daemon (10.200.16.10:41700). Jun 25 18:47:50.690670 sshd[6066]: Accepted publickey for core from 10.200.16.10 port 41700 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:50.692188 sshd[6066]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:50.696326 systemd-logind[1697]: New session 21 of user core. Jun 25 18:47:50.703435 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jun 25 18:47:52.746383 sshd[6066]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:52.751058 systemd[1]: sshd@18-10.200.8.40:22-10.200.16.10:41700.service: Deactivated successfully. Jun 25 18:47:52.753855 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 18:47:52.754906 systemd-logind[1697]: Session 21 logged out. Waiting for processes to exit. Jun 25 18:47:52.756192 systemd-logind[1697]: Removed session 21. Jun 25 18:47:52.861435 systemd[1]: Started sshd@19-10.200.8.40:22-10.200.16.10:41702.service - OpenSSH per-connection server daemon (10.200.16.10:41702). Jun 25 18:47:53.509309 sshd[6084]: Accepted publickey for core from 10.200.16.10 port 41702 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:53.510801 sshd[6084]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:53.515419 systemd-logind[1697]: New session 22 of user core. Jun 25 18:47:53.521435 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 18:47:54.127279 sshd[6084]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:54.131298 systemd[1]: sshd@19-10.200.8.40:22-10.200.16.10:41702.service: Deactivated successfully. Jun 25 18:47:54.133531 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 18:47:54.134432 systemd-logind[1697]: Session 22 logged out. Waiting for processes to exit. Jun 25 18:47:54.135702 systemd-logind[1697]: Removed session 22. Jun 25 18:47:54.242155 systemd[1]: Started sshd@20-10.200.8.40:22-10.200.16.10:41712.service - OpenSSH per-connection server daemon (10.200.16.10:41712). Jun 25 18:47:54.898729 sshd[6100]: Accepted publickey for core from 10.200.16.10 port 41712 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:47:54.901957 sshd[6100]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:54.907246 systemd-logind[1697]: New session 23 of user core. Jun 25 18:47:54.913435 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 18:47:55.418744 sshd[6100]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:55.422252 systemd[1]: sshd@20-10.200.8.40:22-10.200.16.10:41712.service: Deactivated successfully. Jun 25 18:47:55.424954 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 18:47:55.427237 systemd-logind[1697]: Session 23 logged out. Waiting for processes to exit. Jun 25 18:47:55.428488 systemd-logind[1697]: Removed session 23. Jun 25 18:48:00.537555 systemd[1]: Started sshd@21-10.200.8.40:22-10.200.16.10:38958.service - OpenSSH per-connection server daemon (10.200.16.10:38958). Jun 25 18:48:01.199360 sshd[6114]: Accepted publickey for core from 10.200.16.10 port 38958 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:48:01.200860 sshd[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:01.208850 systemd-logind[1697]: New session 24 of user core. Jun 25 18:48:01.212551 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 25 18:48:01.709415 sshd[6114]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:01.712399 systemd[1]: sshd@21-10.200.8.40:22-10.200.16.10:38958.service: Deactivated successfully. Jun 25 18:48:01.714577 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 18:48:01.716133 systemd-logind[1697]: Session 24 logged out. Waiting for processes to exit. Jun 25 18:48:01.717070 systemd-logind[1697]: Removed session 24. 
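The stretch of entries above and below records a long series of short SSH sessions from 10.200.16.10: each is accepted by sshd, given a session scope by systemd, and removed by systemd-logind shortly afterwards. A rough Python sketch for pairing the "New session"/"Removed session" lines and printing how long each session stayed open follows; it assumes one journal entry per input line and hard-codes the year 2024, neither of which the journal states itself.

    import re
    import sys
    from datetime import datetime

    # Matches e.g. "Jun 25 18:47:01.349369 systemd-logind[1697]: New session 10 of user core."
    PATTERN = re.compile(
        r"^(?P<stamp>\w{3} +\d+ [\d:.]+) .*systemd-logind\[\d+\]: "
        r"(?P<verb>New|Removed) session (?P<sid>\d+)"
    )

    opened = {}
    for line in sys.stdin:
        m = PATTERN.search(line)
        if not m:
            continue
        # Drop fractional seconds; the journal omits the year, so assume 2024.
        when = datetime.strptime(m["stamp"].split(".")[0] + " 2024", "%b %d %H:%M:%S %Y")
        if m["verb"] == "New":
            opened[m["sid"]] = when
        elif m["sid"] in opened:
            print(f"session {m['sid']}: {(when - opened.pop(m['sid'])).total_seconds():.0f}s")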
Jun 25 18:48:06.824295 systemd[1]: Started sshd@22-10.200.8.40:22-10.200.16.10:42416.service - OpenSSH per-connection server daemon (10.200.16.10:42416). Jun 25 18:48:07.473570 sshd[6138]: Accepted publickey for core from 10.200.16.10 port 42416 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:48:07.475349 sshd[6138]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:07.479712 systemd-logind[1697]: New session 25 of user core. Jun 25 18:48:07.483418 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 18:48:07.983902 sshd[6138]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:07.987663 systemd[1]: sshd@22-10.200.8.40:22-10.200.16.10:42416.service: Deactivated successfully. Jun 25 18:48:07.990350 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 18:48:07.993340 systemd-logind[1697]: Session 25 logged out. Waiting for processes to exit. Jun 25 18:48:07.994832 systemd-logind[1697]: Removed session 25. Jun 25 18:48:13.103429 systemd[1]: Started sshd@23-10.200.8.40:22-10.200.16.10:42418.service - OpenSSH per-connection server daemon (10.200.16.10:42418). Jun 25 18:48:13.769302 sshd[6202]: Accepted publickey for core from 10.200.16.10 port 42418 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:48:13.770927 sshd[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:13.776020 systemd-logind[1697]: New session 26 of user core. Jun 25 18:48:13.780419 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 25 18:48:14.281783 sshd[6202]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:14.286190 systemd[1]: sshd@23-10.200.8.40:22-10.200.16.10:42418.service: Deactivated successfully. Jun 25 18:48:14.288448 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 18:48:14.289211 systemd-logind[1697]: Session 26 logged out. Waiting for processes to exit. Jun 25 18:48:14.290198 systemd-logind[1697]: Removed session 26. Jun 25 18:48:19.397405 systemd[1]: Started sshd@24-10.200.8.40:22-10.200.16.10:38878.service - OpenSSH per-connection server daemon (10.200.16.10:38878). Jun 25 18:48:20.043987 sshd[6241]: Accepted publickey for core from 10.200.16.10 port 38878 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:48:20.045606 sshd[6241]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:20.050150 systemd-logind[1697]: New session 27 of user core. Jun 25 18:48:20.056447 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 25 18:48:20.556172 sshd[6241]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:20.559173 systemd[1]: sshd@24-10.200.8.40:22-10.200.16.10:38878.service: Deactivated successfully. Jun 25 18:48:20.561501 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 18:48:20.563174 systemd-logind[1697]: Session 27 logged out. Waiting for processes to exit. Jun 25 18:48:20.564241 systemd-logind[1697]: Removed session 27. Jun 25 18:48:25.670393 systemd[1]: Started sshd@25-10.200.8.40:22-10.200.16.10:40638.service - OpenSSH per-connection server daemon (10.200.16.10:40638). Jun 25 18:48:26.319967 sshd[6261]: Accepted publickey for core from 10.200.16.10 port 40638 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:48:26.321569 sshd[6261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:26.326337 systemd-logind[1697]: New session 28 of user core. 
Jun 25 18:48:26.330422 systemd[1]: Started session-28.scope - Session 28 of User core. Jun 25 18:48:26.841128 sshd[6261]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:26.844682 systemd[1]: sshd@25-10.200.8.40:22-10.200.16.10:40638.service: Deactivated successfully. Jun 25 18:48:26.847404 systemd[1]: session-28.scope: Deactivated successfully. Jun 25 18:48:26.849206 systemd-logind[1697]: Session 28 logged out. Waiting for processes to exit. Jun 25 18:48:26.850289 systemd-logind[1697]: Removed session 28. Jun 25 18:48:31.958381 systemd[1]: Started sshd@26-10.200.8.40:22-10.200.16.10:40642.service - OpenSSH per-connection server daemon (10.200.16.10:40642). Jun 25 18:48:32.609929 sshd[6288]: Accepted publickey for core from 10.200.16.10 port 40642 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:48:32.611800 sshd[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:32.616221 systemd-logind[1697]: New session 29 of user core. Jun 25 18:48:32.622410 systemd[1]: Started session-29.scope - Session 29 of User core. Jun 25 18:48:33.121060 sshd[6288]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:33.124199 systemd[1]: sshd@26-10.200.8.40:22-10.200.16.10:40642.service: Deactivated successfully. Jun 25 18:48:33.126522 systemd[1]: session-29.scope: Deactivated successfully. Jun 25 18:48:33.128200 systemd-logind[1697]: Session 29 logged out. Waiting for processes to exit. Jun 25 18:48:33.129378 systemd-logind[1697]: Removed session 29. Jun 25 18:48:38.239231 systemd[1]: Started sshd@27-10.200.8.40:22-10.200.16.10:38870.service - OpenSSH per-connection server daemon (10.200.16.10:38870). Jun 25 18:48:38.896538 sshd[6306]: Accepted publickey for core from 10.200.16.10 port 38870 ssh2: RSA SHA256:6GCBd73KL+McRp5QTtApIR7SCNpbaQE6beYJZLfpxAQ Jun 25 18:48:38.898313 sshd[6306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:38.904175 systemd-logind[1697]: New session 30 of user core. Jun 25 18:48:38.910422 systemd[1]: Started session-30.scope - Session 30 of User core. Jun 25 18:48:39.411776 sshd[6306]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:39.415955 systemd[1]: sshd@27-10.200.8.40:22-10.200.16.10:38870.service: Deactivated successfully. Jun 25 18:48:39.418004 systemd[1]: session-30.scope: Deactivated successfully. Jun 25 18:48:39.418788 systemd-logind[1697]: Session 30 logged out. Waiting for processes to exit. Jun 25 18:48:39.419840 systemd-logind[1697]: Removed session 30. Jun 25 18:48:53.447636 systemd[1]: cri-containerd-aa1af2b7c455330f9cee526350a61d84c7130c336355b1210e810312c72fb573.scope: Deactivated successfully. Jun 25 18:48:53.447987 systemd[1]: cri-containerd-aa1af2b7c455330f9cee526350a61d84c7130c336355b1210e810312c72fb573.scope: Consumed 5.102s CPU time. 
Jun 25 18:48:53.476853 containerd[1716]: time="2024-06-25T18:48:53.474332486Z" level=info msg="shim disconnected" id=aa1af2b7c455330f9cee526350a61d84c7130c336355b1210e810312c72fb573 namespace=k8s.io Jun 25 18:48:53.476853 containerd[1716]: time="2024-06-25T18:48:53.474415487Z" level=warning msg="cleaning up after shim disconnected" id=aa1af2b7c455330f9cee526350a61d84c7130c336355b1210e810312c72fb573 namespace=k8s.io Jun 25 18:48:53.476853 containerd[1716]: time="2024-06-25T18:48:53.474432087Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:48:53.476631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa1af2b7c455330f9cee526350a61d84c7130c336355b1210e810312c72fb573-rootfs.mount: Deactivated successfully. Jun 25 18:48:53.778152 kubelet[3241]: I0625 18:48:53.778030 3241 scope.go:117] "RemoveContainer" containerID="aa1af2b7c455330f9cee526350a61d84c7130c336355b1210e810312c72fb573" Jun 25 18:48:53.780657 containerd[1716]: time="2024-06-25T18:48:53.780614910Z" level=info msg="CreateContainer within sandbox \"685d1ce91e5691bd4cc2b2ecf8af79a7f81310f1afa3194605e8844cbd08b496\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jun 25 18:48:53.814726 containerd[1716]: time="2024-06-25T18:48:53.814677380Z" level=info msg="CreateContainer within sandbox \"685d1ce91e5691bd4cc2b2ecf8af79a7f81310f1afa3194605e8844cbd08b496\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0ac20ae29bf20733d728e178c99158136c6719950d50ddfe6b31f5cb4b4d6532\"" Jun 25 18:48:53.815404 containerd[1716]: time="2024-06-25T18:48:53.815346287Z" level=info msg="StartContainer for \"0ac20ae29bf20733d728e178c99158136c6719950d50ddfe6b31f5cb4b4d6532\"" Jun 25 18:48:53.849505 systemd[1]: run-containerd-runc-k8s.io-0ac20ae29bf20733d728e178c99158136c6719950d50ddfe6b31f5cb4b4d6532-runc.XHSMTN.mount: Deactivated successfully. Jun 25 18:48:53.857432 systemd[1]: Started cri-containerd-0ac20ae29bf20733d728e178c99158136c6719950d50ddfe6b31f5cb4b4d6532.scope - libcontainer container 0ac20ae29bf20733d728e178c99158136c6719950d50ddfe6b31f5cb4b4d6532. Jun 25 18:48:53.885132 containerd[1716]: time="2024-06-25T18:48:53.884323936Z" level=info msg="StartContainer for \"0ac20ae29bf20733d728e178c99158136c6719950d50ddfe6b31f5cb4b4d6532\" returns successfully" Jun 25 18:48:54.019794 systemd[1]: cri-containerd-96b3a31cd6f98683b85116361caed935be37d4e3b98d5b03c374457ddf437b7f.scope: Deactivated successfully. Jun 25 18:48:54.020542 systemd[1]: cri-containerd-96b3a31cd6f98683b85116361caed935be37d4e3b98d5b03c374457ddf437b7f.scope: Consumed 3.842s CPU time, 22.7M memory peak, 0B memory swap peak. Jun 25 18:48:54.043145 containerd[1716]: time="2024-06-25T18:48:54.042992157Z" level=info msg="shim disconnected" id=96b3a31cd6f98683b85116361caed935be37d4e3b98d5b03c374457ddf437b7f namespace=k8s.io Jun 25 18:48:54.043145 containerd[1716]: time="2024-06-25T18:48:54.043052458Z" level=warning msg="cleaning up after shim disconnected" id=96b3a31cd6f98683b85116361caed935be37d4e3b98d5b03c374457ddf437b7f namespace=k8s.io Jun 25 18:48:54.043145 containerd[1716]: time="2024-06-25T18:48:54.043066258Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:48:54.477096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96b3a31cd6f98683b85116361caed935be37d4e3b98d5b03c374457ddf437b7f-rootfs.mount: Deactivated successfully. 
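When systemd tears down a crashed container's scope it logs the cgroup's accumulated usage, which is how the figures here can be read: 5.102s CPU for the tigera-operator container (aa1af2b7...), 3.842s CPU with a 22.7M memory peak for kube-controller-manager (96b3a31c...), and, a few lines further on, 2.246s CPU and 15.9M for kube-scheduler. A small sketch for pulling those "Consumed ..." figures out of the journal text, under the same one-entry-per-line assumption:

    import re
    import sys

    for line in sys.stdin:
        # e.g. "systemd[1]: cri-containerd-<id>.scope: Consumed 3.842s CPU time, 22.7M memory peak, ..."
        m = re.search(r"cri-containerd-([0-9a-f]+)\.scope: Consumed (.+)", line)
        if m:
            container_id, usage = m.group(1), m.group(2).rstrip(". \n")
            print(container_id[:12], "->", usage)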
Jun 25 18:48:54.783066 kubelet[3241]: I0625 18:48:54.782181 3241 scope.go:117] "RemoveContainer" containerID="96b3a31cd6f98683b85116361caed935be37d4e3b98d5b03c374457ddf437b7f" Jun 25 18:48:54.785795 containerd[1716]: time="2024-06-25T18:48:54.785743579Z" level=info msg="CreateContainer within sandbox \"2c4f8a796288c731a78da22cbd1c79327176230a1dcb48c8474b52818c99c886\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jun 25 18:48:54.818720 containerd[1716]: time="2024-06-25T18:48:54.818682361Z" level=info msg="CreateContainer within sandbox \"2c4f8a796288c731a78da22cbd1c79327176230a1dcb48c8474b52818c99c886\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9bfa632a889c86f328118d3a97b45f55811c8d2353d0fd49b2cc4e863d370a03\"" Jun 25 18:48:54.819222 containerd[1716]: time="2024-06-25T18:48:54.819186347Z" level=info msg="StartContainer for \"9bfa632a889c86f328118d3a97b45f55811c8d2353d0fd49b2cc4e863d370a03\"" Jun 25 18:48:54.853467 systemd[1]: Started cri-containerd-9bfa632a889c86f328118d3a97b45f55811c8d2353d0fd49b2cc4e863d370a03.scope - libcontainer container 9bfa632a889c86f328118d3a97b45f55811c8d2353d0fd49b2cc4e863d370a03. Jun 25 18:48:54.899386 containerd[1716]: time="2024-06-25T18:48:54.899333114Z" level=info msg="StartContainer for \"9bfa632a889c86f328118d3a97b45f55811c8d2353d0fd49b2cc4e863d370a03\" returns successfully" Jun 25 18:48:58.214221 kubelet[3241]: E0625 18:48:58.214104 3241 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ci-4012.0.0-a-d50f1c7422)" Jun 25 18:48:58.456411 systemd[1]: cri-containerd-1ae0d7a609f3643228e36b47177881e3f0c924ef2d509b3699326de2aa3b9eb5.scope: Deactivated successfully. Jun 25 18:48:58.457224 systemd[1]: cri-containerd-1ae0d7a609f3643228e36b47177881e3f0c924ef2d509b3699326de2aa3b9eb5.scope: Consumed 2.246s CPU time, 15.9M memory peak, 0B memory swap peak. Jun 25 18:48:58.480949 containerd[1716]: time="2024-06-25T18:48:58.480658113Z" level=info msg="shim disconnected" id=1ae0d7a609f3643228e36b47177881e3f0c924ef2d509b3699326de2aa3b9eb5 namespace=k8s.io Jun 25 18:48:58.480949 containerd[1716]: time="2024-06-25T18:48:58.480731514Z" level=warning msg="cleaning up after shim disconnected" id=1ae0d7a609f3643228e36b47177881e3f0c924ef2d509b3699326de2aa3b9eb5 namespace=k8s.io Jun 25 18:48:58.480949 containerd[1716]: time="2024-06-25T18:48:58.480742814Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:48:58.482941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ae0d7a609f3643228e36b47177881e3f0c924ef2d509b3699326de2aa3b9eb5-rootfs.mount: Deactivated successfully. 
Jun 25 18:48:58.521413 kubelet[3241]: E0625 18:48:58.521381 3241 event.go:346] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.40:44900->10.200.8.16:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4012.0.0-a-d50f1c7422.17dc53b08c2be0af kube-system 994 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4012.0.0-a-d50f1c7422,UID:c2217986557928b946199e82fe17737e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4012.0.0-a-d50f1c7422,},FirstTimestamp:2024-06-25 18:46:15 +0000 UTC,LastTimestamp:2024-06-25 18:48:48.070469378 +0000 UTC m=+258.097677685,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012.0.0-a-d50f1c7422,}" Jun 25 18:48:58.716053 kubelet[3241]: E0625 18:48:58.715676 3241 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.40:45088->10.200.8.16:2379: read: connection timed out" Jun 25 18:48:58.798078 kubelet[3241]: I0625 18:48:58.797662 3241 scope.go:117] "RemoveContainer" containerID="1ae0d7a609f3643228e36b47177881e3f0c924ef2d509b3699326de2aa3b9eb5" Jun 25 18:48:58.799637 containerd[1716]: time="2024-06-25T18:48:58.799601494Z" level=info msg="CreateContainer within sandbox \"f6208e605821b1f35ada3470623b1349daa3728509d92030d2444aebc5f49471\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jun 25 18:48:58.833542 containerd[1716]: time="2024-06-25T18:48:58.833503800Z" level=info msg="CreateContainer within sandbox \"f6208e605821b1f35ada3470623b1349daa3728509d92030d2444aebc5f49471\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"535026f9a451a982992ab9f6aa8e2c0c54db0c86bf817a9e22f604124b0514fe\"" Jun 25 18:48:58.834057 containerd[1716]: time="2024-06-25T18:48:58.833957704Z" level=info msg="StartContainer for \"535026f9a451a982992ab9f6aa8e2c0c54db0c86bf817a9e22f604124b0514fe\"" Jun 25 18:48:58.868427 systemd[1]: Started cri-containerd-535026f9a451a982992ab9f6aa8e2c0c54db0c86bf817a9e22f604124b0514fe.scope - libcontainer container 535026f9a451a982992ab9f6aa8e2c0c54db0c86bf817a9e22f604124b0514fe. Jun 25 18:48:58.912383 containerd[1716]: time="2024-06-25T18:48:58.912336712Z" level=info msg="StartContainer for \"535026f9a451a982992ab9f6aa8e2c0c54db0c86bf817a9e22f604124b0514fe\" returns successfully" Jun 25 18:49:04.217938 kubelet[3241]: I0625 18:49:04.217890 3241 status_manager.go:853] "Failed to get status for pod" podUID="8bf2877f-2e27-414d-a577-1d8d70a983c9" pod="tigera-operator/tigera-operator-76c4974c85-zjd4f" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.40:44988->10.200.8.16:2379: read: connection timed out" Jun 25 18:49:05.373153 systemd[1]: cri-containerd-0ac20ae29bf20733d728e178c99158136c6719950d50ddfe6b31f5cb4b4d6532.scope: Deactivated successfully. Jun 25 18:49:05.397349 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ac20ae29bf20733d728e178c99158136c6719950d50ddfe6b31f5cb4b4d6532-rootfs.mount: Deactivated successfully. 
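Several of the kubelet errors in this stretch (the rejected Unhealthy event, the failed lease update, and the failed pod status fetch) report the same underlying symptom: reads from 10.200.8.40 to 10.200.8.16:2379 timing out, which, going by the port number, is the etcd client endpoint, while the later lease failures time out against the local apiserver at 10.200.8.40:6443. A quick sketch for tallying those timeout errors per minute from the journal text, again assuming one entry per line:

    import re
    import sys
    from collections import Counter

    per_minute = Counter()
    for line in sys.stdin:
        if "read: connection timed out" in line or "Failed to update lease" in line:
            m = re.match(r"(\w{3} +\d+ \d+:\d+)", line)
            if m:
                per_minute[m.group(1)] += 1

    for minute, count in sorted(per_minute.items()):
        print(minute, count)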
Jun 25 18:49:05.422692 containerd[1716]: time="2024-06-25T18:49:05.422621651Z" level=info msg="shim disconnected" id=0ac20ae29bf20733d728e178c99158136c6719950d50ddfe6b31f5cb4b4d6532 namespace=k8s.io Jun 25 18:49:05.422692 containerd[1716]: time="2024-06-25T18:49:05.422687152Z" level=warning msg="cleaning up after shim disconnected" id=0ac20ae29bf20733d728e178c99158136c6719950d50ddfe6b31f5cb4b4d6532 namespace=k8s.io Jun 25 18:49:05.423301 containerd[1716]: time="2024-06-25T18:49:05.422698752Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:49:05.818468 kubelet[3241]: I0625 18:49:05.818365 3241 scope.go:117] "RemoveContainer" containerID="aa1af2b7c455330f9cee526350a61d84c7130c336355b1210e810312c72fb573" Jun 25 18:49:05.819427 kubelet[3241]: I0625 18:49:05.818656 3241 scope.go:117] "RemoveContainer" containerID="0ac20ae29bf20733d728e178c99158136c6719950d50ddfe6b31f5cb4b4d6532" Jun 25 18:49:05.819728 kubelet[3241]: E0625 18:49:05.819563 3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-76c4974c85-zjd4f_tigera-operator(8bf2877f-2e27-414d-a577-1d8d70a983c9)\"" pod="tigera-operator/tigera-operator-76c4974c85-zjd4f" podUID="8bf2877f-2e27-414d-a577-1d8d70a983c9" Jun 25 18:49:05.819973 containerd[1716]: time="2024-06-25T18:49:05.819909698Z" level=info msg="RemoveContainer for \"aa1af2b7c455330f9cee526350a61d84c7130c336355b1210e810312c72fb573\"" Jun 25 18:49:05.829214 containerd[1716]: time="2024-06-25T18:49:05.829174056Z" level=info msg="RemoveContainer for \"aa1af2b7c455330f9cee526350a61d84c7130c336355b1210e810312c72fb573\" returns successfully" Jun 25 18:49:08.716552 kubelet[3241]: E0625 18:49:08.716425 3241 controller.go:195] "Failed to update lease" err="Put \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-d50f1c7422?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jun 25 18:49:18.718472 kubelet[3241]: E0625 18:49:18.718088 3241 controller.go:195] "Failed to update lease" err="Put \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-d50f1c7422?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jun 25 18:49:19.067388 kubelet[3241]: I0625 18:49:19.067214 3241 scope.go:117] "RemoveContainer" containerID="0ac20ae29bf20733d728e178c99158136c6719950d50ddfe6b31f5cb4b4d6532"
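The pod_workers error above has put tigera-operator into CrashLoopBackOff with a 10s back-off before the next restart attempt. If a container keeps failing, kubelet grows that delay roughly as in the sketch below: about 10 seconds initially, doubling after each failed restart, capped at five minutes, and reset once the container has run cleanly for a while. The numbers are illustrative of that documented behaviour, not measured from this log.

    def crashloop_delays(restarts, base=10, cap=300):
        """Yield the approximate back-off (in seconds) before each successive restart."""
        delay = base
        for _ in range(restarts):
            yield delay
            delay = min(delay * 2, cap)

    # Back-off for a container that fails seven restarts in a row:
    print(list(crashloop_delays(7)))  # [10, 20, 40, 80, 160, 300, 300]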