Nov 12 20:53:39.064528 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024
Nov 12 20:53:39.064555 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:53:39.064567 kernel: BIOS-provided physical RAM map:
Nov 12 20:53:39.064574 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 12 20:53:39.064579 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Nov 12 20:53:39.064587 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Nov 12 20:53:39.064596 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Nov 12 20:53:39.064604 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Nov 12 20:53:39.064612 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Nov 12 20:53:39.064619 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Nov 12 20:53:39.064626 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Nov 12 20:53:39.064635 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Nov 12 20:53:39.064641 kernel: printk: bootconsole [earlyser0] enabled
Nov 12 20:53:39.064650 kernel: NX (Execute Disable) protection: active
Nov 12 20:53:39.064662 kernel: APIC: Static calls initialized
Nov 12 20:53:39.064670 kernel: efi: EFI v2.7 by Microsoft
Nov 12 20:53:39.064679 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Nov 12 20:53:39.064687 kernel: SMBIOS 3.1.0 present.
Nov 12 20:53:39.064696 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Nov 12 20:53:39.064703 kernel: Hypervisor detected: Microsoft Hyper-V
Nov 12 20:53:39.064710 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Nov 12 20:53:39.064720 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Nov 12 20:53:39.064727 kernel: Hyper-V: Nested features: 0x1e0101
Nov 12 20:53:39.064734 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Nov 12 20:53:39.064745 kernel: Hyper-V: Using hypercall for remote TLB flush
Nov 12 20:53:39.064752 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Nov 12 20:53:39.064760 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Nov 12 20:53:39.064770 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Nov 12 20:53:39.064777 kernel: tsc: Detected 2593.905 MHz processor
Nov 12 20:53:39.064785 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 20:53:39.064795 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 20:53:39.064802 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Nov 12 20:53:39.064810 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 12 20:53:39.064821 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 20:53:39.064828 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Nov 12 20:53:39.064837 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Nov 12 20:53:39.064846 kernel: Using GB pages for direct mapping
Nov 12 20:53:39.064852 kernel: Secure boot disabled
Nov 12 20:53:39.064861 kernel: ACPI: Early table checksum verification disabled
Nov 12 20:53:39.064870 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Nov 12 20:53:39.064880 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:53:39.064893 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:53:39.064900 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Nov 12 20:53:39.064908 kernel: ACPI: FACS 0x000000003FFFE000 000040
Nov 12 20:53:39.064918 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:53:39.064926 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:53:39.064935 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:53:39.064946 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:53:39.064953 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:53:39.064962 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:53:39.064971 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:53:39.064978 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Nov 12 20:53:39.064988 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Nov 12 20:53:39.064997 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Nov 12 20:53:39.065004 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Nov 12 20:53:39.065016 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Nov 12 20:53:39.065023 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Nov 12 20:53:39.065031 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Nov 12 20:53:39.065041 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Nov 12 20:53:39.065048 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Nov 12 20:53:39.065057 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Nov 12 20:53:39.065067 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 12 20:53:39.065075 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 12 20:53:39.065085 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Nov 12 20:53:39.065097 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Nov 12 20:53:39.065105 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Nov 12 20:53:39.065115 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Nov 12 20:53:39.065124 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Nov 12 20:53:39.065132 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Nov 12 20:53:39.065143 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Nov 12 20:53:39.065150 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Nov 12 20:53:39.065159 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Nov 12 20:53:39.065168 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Nov 12 20:53:39.065178 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Nov 12 20:53:39.065188 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Nov 12 20:53:39.065196 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Nov 12 20:53:39.065203 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Nov 12 20:53:39.065211 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Nov 12 20:53:39.065221 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Nov 12 20:53:39.065229 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Nov 12 20:53:39.065236 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Nov 12 20:53:39.065243 kernel: Zone ranges:
Nov 12 20:53:39.065254 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 20:53:39.065264 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 12 20:53:39.065273 kernel:   Normal   [mem 0x0000000100000000-0x00000002bfffffff]
Nov 12 20:53:39.065282 kernel: Movable zone start for each node
Nov 12 20:53:39.065293 kernel: Early memory node ranges
Nov 12 20:53:39.065302 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 12 20:53:39.065314 kernel:   node   0: [mem 0x0000000000100000-0x000000003ff40fff]
Nov 12 20:53:39.065323 kernel:   node   0: [mem 0x000000003ffff000-0x000000003fffffff]
Nov 12 20:53:39.065333 kernel:   node   0: [mem 0x0000000100000000-0x00000002bfffffff]
Nov 12 20:53:39.065345 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Nov 12 20:53:39.065355 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:53:39.065362 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 12 20:53:39.065373 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Nov 12 20:53:39.065380 kernel: ACPI: PM-Timer IO Port: 0x408
Nov 12 20:53:39.065388 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Nov 12 20:53:39.065407 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Nov 12 20:53:39.065418 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 12 20:53:39.065428 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 20:53:39.065440 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Nov 12 20:53:39.065452 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 12 20:53:39.065464 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Nov 12 20:53:39.065476 kernel: Booting paravirtualized kernel on Hyper-V
Nov 12 20:53:39.065490 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 20:53:39.065502 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 12 20:53:39.065511 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Nov 12 20:53:39.065525 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Nov 12 20:53:39.065537 kernel: pcpu-alloc: [0] 0 1
Nov 12 20:53:39.065554 kernel: Hyper-V: PV spinlocks enabled
Nov 12 20:53:39.065567 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 12 20:53:39.065583 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:53:39.065596 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 20:53:39.065610 kernel: random: crng init done
Nov 12 20:53:39.065625 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 12 20:53:39.065639 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 20:53:39.065654 kernel: Fallback order for Node 0: 0
Nov 12 20:53:39.065672 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Nov 12 20:53:39.065697 kernel: Policy zone: Normal
Nov 12 20:53:39.065716 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 20:53:39.065731 kernel: software IO TLB: area num 2.
Nov 12 20:53:39.065747 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 310124K reserved, 0K cma-reserved)
Nov 12 20:53:39.065763 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 12 20:53:39.065778 kernel: ftrace: allocating 37799 entries in 148 pages
Nov 12 20:53:39.065794 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 20:53:39.065810 kernel: Dynamic Preempt: voluntary
Nov 12 20:53:39.065825 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 20:53:39.065845 kernel: rcu: RCU event tracing is enabled.
Nov 12 20:53:39.065863 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 12 20:53:39.065878 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 20:53:39.065892 kernel: Rude variant of Tasks RCU enabled.
Nov 12 20:53:39.065907 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 20:53:39.065920 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 20:53:39.065937 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 12 20:53:39.065952 kernel: Using NULL legacy PIC
Nov 12 20:53:39.065964 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Nov 12 20:53:39.065977 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 20:53:39.065990 kernel: Console: colour dummy device 80x25
Nov 12 20:53:39.066004 kernel: printk: console [tty1] enabled
Nov 12 20:53:39.066019 kernel: printk: console [ttyS0] enabled
Nov 12 20:53:39.066034 kernel: printk: bootconsole [earlyser0] disabled
Nov 12 20:53:39.066049 kernel: ACPI: Core revision 20230628
Nov 12 20:53:39.066064 kernel: Failed to register legacy timer interrupt
Nov 12 20:53:39.066082 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 20:53:39.066097 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Nov 12 20:53:39.066112 kernel: Hyper-V: Using IPI hypercalls
Nov 12 20:53:39.066127 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Nov 12 20:53:39.066142 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Nov 12 20:53:39.066157 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Nov 12 20:53:39.066172 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Nov 12 20:53:39.066186 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Nov 12 20:53:39.066200 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Nov 12 20:53:39.066217 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Nov 12 20:53:39.066231 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 12 20:53:39.066244 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Nov 12 20:53:39.066259 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 20:53:39.066274 kernel: Spectre V2 : Mitigation: Retpolines
Nov 12 20:53:39.066289 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 20:53:39.066303 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 12 20:53:39.066318 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 12 20:53:39.066332 kernel: RETBleed: Vulnerable
Nov 12 20:53:39.066351 kernel: Speculative Store Bypass: Vulnerable
Nov 12 20:53:39.066365 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 12 20:53:39.066379 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 12 20:53:39.066392 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 12 20:53:39.066431 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 20:53:39.066446 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 20:53:39.066459 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 20:53:39.066472 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 12 20:53:39.066486 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 12 20:53:39.066500 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 12 20:53:39.066513 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 20:53:39.066529 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Nov 12 20:53:39.066559 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Nov 12 20:53:39.066573 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Nov 12 20:53:39.066587 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Nov 12 20:53:39.066600 kernel: Freeing SMP alternatives memory: 32K
Nov 12 20:53:39.066613 kernel: pid_max: default: 32768 minimum: 301
Nov 12 20:53:39.066628 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 20:53:39.066642 kernel: landlock: Up and running.
Nov 12 20:53:39.066656 kernel: SELinux: Initializing.
Nov 12 20:53:39.066670 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 12 20:53:39.066685 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 12 20:53:39.066699 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Nov 12 20:53:39.066717 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:53:39.066731 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:53:39.066746 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:53:39.066762 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 12 20:53:39.066776 kernel: signal: max sigframe size: 3632
Nov 12 20:53:39.066792 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 20:53:39.066807 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 20:53:39.066822 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 12 20:53:39.066837 kernel: smp: Bringing up secondary CPUs ...
Nov 12 20:53:39.066855 kernel: smpboot: x86: Booting SMP configuration:
Nov 12 20:53:39.066870 kernel: .... node #0, CPUs: #1
Nov 12 20:53:39.066886 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Nov 12 20:53:39.066903 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 12 20:53:39.066917 kernel: smp: Brought up 1 node, 2 CPUs
Nov 12 20:53:39.066933 kernel: smpboot: Max logical packages: 1
Nov 12 20:53:39.066948 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Nov 12 20:53:39.066963 kernel: devtmpfs: initialized
Nov 12 20:53:39.066981 kernel: x86/mm: Memory block size: 128MB
Nov 12 20:53:39.066996 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Nov 12 20:53:39.067012 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 20:53:39.067027 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 12 20:53:39.067042 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 20:53:39.067057 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 20:53:39.067072 kernel: audit: initializing netlink subsys (disabled)
Nov 12 20:53:39.067088 kernel: audit: type=2000 audit(1731444818.028:1): state=initialized audit_enabled=0 res=1
Nov 12 20:53:39.067103 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 20:53:39.067121 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 12 20:53:39.067136 kernel: cpuidle: using governor menu
Nov 12 20:53:39.067151 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 20:53:39.067166 kernel: dca service started, version 1.12.1
Nov 12 20:53:39.067181 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Nov 12 20:53:39.067197 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 12 20:53:39.067212 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 20:53:39.067227 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 20:53:39.067242 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 20:53:39.067260 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 20:53:39.067276 kernel: ACPI: Added _OSI(Module Device)
Nov 12 20:53:39.067290 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 20:53:39.067305 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 20:53:39.067321 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 20:53:39.067335 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 20:53:39.067351 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 20:53:39.067367 kernel: ACPI: Interpreter enabled
Nov 12 20:53:39.067382 kernel: ACPI: PM: (supports S0 S5)
Nov 12 20:53:39.067416 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 20:53:39.067431 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 20:53:39.067446 kernel: PCI: Ignoring E820 reservations for host bridge windows
Nov 12 20:53:39.067462 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Nov 12 20:53:39.067477 kernel: iommu: Default domain type: Translated
Nov 12 20:53:39.067492 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 20:53:39.067507 kernel: efivars: Registered efivars operations
Nov 12 20:53:39.067521 kernel: PCI: Using ACPI for IRQ routing
Nov 12 20:53:39.067536 kernel: PCI: System does not support PCI
Nov 12 20:53:39.067554 kernel: vgaarb: loaded
Nov 12 20:53:39.067569 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Nov 12 20:53:39.067584 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 20:53:39.067600 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 20:53:39.067615 kernel: pnp: PnP ACPI init
Nov 12 20:53:39.067631 kernel: pnp: PnP ACPI: found 3 devices
Nov 12 20:53:39.067646 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 20:53:39.067661 kernel: NET: Registered PF_INET protocol family
Nov 12 20:53:39.067677 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 12 20:53:39.067695 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 12 20:53:39.067710 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 20:53:39.067725 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 20:53:39.067740 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Nov 12 20:53:39.067755 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 12 20:53:39.067770 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 12 20:53:39.067785 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 12 20:53:39.067800 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 20:53:39.067815 kernel: NET: Registered PF_XDP protocol family
Nov 12 20:53:39.067834 kernel: PCI: CLS 0 bytes, default 64
Nov 12 20:53:39.067849 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 12 20:53:39.067865 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Nov 12 20:53:39.067880 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 12 20:53:39.067895 kernel: Initialise system trusted keyrings
Nov 12 20:53:39.067909 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 12 20:53:39.067924 kernel: Key type asymmetric registered
Nov 12 20:53:39.067939 kernel: Asymmetric key parser 'x509' registered
Nov 12 20:53:39.067953 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 20:53:39.067971 kernel: io scheduler mq-deadline registered
Nov 12 20:53:39.067986 kernel: io scheduler kyber registered
Nov 12 20:53:39.068001 kernel: io scheduler bfq registered
Nov 12 20:53:39.068016 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 20:53:39.068031 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 20:53:39.068045 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 20:53:39.068059 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 12 20:53:39.068075 kernel: i8042: PNP: No PS/2 controller found.
Nov 12 20:53:39.068250 kernel: rtc_cmos 00:02: registered as rtc0
Nov 12 20:53:39.068417 kernel: rtc_cmos 00:02: setting system clock to 2024-11-12T20:53:38 UTC (1731444818)
Nov 12 20:53:39.068563 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Nov 12 20:53:39.068583 kernel: intel_pstate: CPU model not supported
Nov 12 20:53:39.068598 kernel: efifb: probing for efifb
Nov 12 20:53:39.068612 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Nov 12 20:53:39.068625 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Nov 12 20:53:39.068639 kernel: efifb: scrolling: redraw
Nov 12 20:53:39.068657 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 12 20:53:39.068671 kernel: Console: switching to colour frame buffer device 128x48
Nov 12 20:53:39.068685 kernel: fb0: EFI VGA frame buffer device
Nov 12 20:53:39.068699 kernel: pstore: Using crash dump compression: deflate
Nov 12 20:53:39.068713 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 12 20:53:39.068727 kernel: NET: Registered PF_INET6 protocol family
Nov 12 20:53:39.068742 kernel: Segment Routing with IPv6
Nov 12 20:53:39.068755 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 20:53:39.068769 kernel: NET: Registered PF_PACKET protocol family
Nov 12 20:53:39.068783 kernel: Key type dns_resolver registered
Nov 12 20:53:39.068799 kernel: IPI shorthand broadcast: enabled
Nov 12 20:53:39.068814 kernel: sched_clock: Marking stable (840002700, 48324800)->(1160748500, -272421000)
Nov 12 20:53:39.068828 kernel: registered taskstats version 1
Nov 12 20:53:39.068844 kernel: Loading compiled-in X.509 certificates
Nov 12 20:53:39.068860 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a'
Nov 12 20:53:39.068874 kernel: Key type .fscrypt registered
Nov 12 20:53:39.068888 kernel: Key type fscrypt-provisioning registered
Nov 12 20:53:39.068902 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 20:53:39.068919 kernel: ima: Allocated hash algorithm: sha1
Nov 12 20:53:39.068934 kernel: ima: No architecture policies found
Nov 12 20:53:39.068949 kernel: clk: Disabling unused clocks
Nov 12 20:53:39.068964 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 12 20:53:39.068978 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 20:53:39.068993 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 12 20:53:39.069008 kernel: Run /init as init process
Nov 12 20:53:39.069022 kernel:   with arguments:
Nov 12 20:53:39.069036 kernel:     /init
Nov 12 20:53:39.069053 kernel:   with environment:
Nov 12 20:53:39.069067 kernel:     HOME=/
Nov 12 20:53:39.069081 kernel:     TERM=linux
Nov 12 20:53:39.069094 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 20:53:39.069112 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:53:39.069130 systemd[1]: Detected virtualization microsoft.
Nov 12 20:53:39.069145 systemd[1]: Detected architecture x86-64.
Nov 12 20:53:39.069160 systemd[1]: Running in initrd.
Nov 12 20:53:39.069177 systemd[1]: No hostname configured, using default hostname.
Nov 12 20:53:39.069192 systemd[1]: Hostname set to .
Nov 12 20:53:39.069208 systemd[1]: Initializing machine ID from random generator.
Nov 12 20:53:39.069223 systemd[1]: Queued start job for default target initrd.target.
Nov 12 20:53:39.069238 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:53:39.069254 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:53:39.069271 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 20:53:39.069286 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:53:39.069304 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 20:53:39.069320 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 20:53:39.069338 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 20:53:39.069354 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 20:53:39.069370 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:53:39.069386 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:53:39.069422 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:53:39.069441 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:53:39.069457 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:53:39.069473 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:53:39.069489 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:53:39.069504 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:53:39.069520 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:53:39.069536 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:53:39.069552 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:53:39.069568 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:53:39.069586 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:53:39.069602 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:53:39.069618 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 20:53:39.069634 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:53:39.069649 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 20:53:39.069665 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 20:53:39.069681 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:53:39.069694 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:53:39.069744 systemd-journald[176]: Collecting audit messages is disabled.
Nov 12 20:53:39.069780 systemd-journald[176]: Journal started
Nov 12 20:53:39.069816 systemd-journald[176]: Runtime Journal (/run/log/journal/b9d9bd314bea40d89ae12a795b53be4d) is 8.0M, max 158.8M, 150.8M free.
Nov 12 20:53:39.076436 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:39.087308 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:53:39.088144 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 20:53:39.094288 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:53:39.101241 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 20:53:39.103643 systemd-modules-load[177]: Inserted module 'overlay'
Nov 12 20:53:39.103825 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:39.122588 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:53:39.127517 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:53:39.134481 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:53:39.161420 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:53:39.164967 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:53:39.172267 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:53:39.182162 kernel: Bridge firewalling registered
Nov 12 20:53:39.182476 systemd-modules-load[177]: Inserted module 'br_netfilter'
Nov 12 20:53:39.186535 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:53:39.194545 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:53:39.200993 dracut-cmdline[200]: dracut-dracut-053
Nov 12 20:53:39.204640 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:53:39.203571 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:53:39.223158 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:53:39.241132 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:53:39.258221 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:53:39.265496 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:53:39.279611 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:53:39.308419 kernel: SCSI subsystem initialized
Nov 12 20:53:39.317926 systemd-resolved[279]: Positive Trust Anchors:
Nov 12 20:53:39.317939 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:53:39.326617 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:53:39.317993 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:53:39.322932 systemd-resolved[279]: Defaulting to hostname 'linux'.
Nov 12 20:53:39.346930 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:53:39.350162 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:53:39.360560 kernel: iscsi: registered transport (tcp)
Nov 12 20:53:39.382423 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:53:39.382471 kernel: QLogic iSCSI HBA Driver
Nov 12 20:53:39.417202 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:53:39.427518 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:53:39.458983 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:53:39.459039 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:53:39.462598 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:53:39.502428 kernel: raid6: avx512x4 gen() 18342 MB/s
Nov 12 20:53:39.521414 kernel: raid6: avx512x2 gen() 18377 MB/s
Nov 12 20:53:39.540412 kernel: raid6: avx512x1 gen() 18333 MB/s
Nov 12 20:53:39.559416 kernel: raid6: avx2x4 gen() 18312 MB/s
Nov 12 20:53:39.578410 kernel: raid6: avx2x2 gen() 18279 MB/s
Nov 12 20:53:39.598590 kernel: raid6: avx2x1 gen() 14058 MB/s
Nov 12 20:53:39.598628 kernel: raid6: using algorithm avx512x2 gen() 18377 MB/s
Nov 12 20:53:39.620064 kernel: raid6: .... xor() 30370 MB/s, rmw enabled
Nov 12 20:53:39.620096 kernel: raid6: using avx512x2 recovery algorithm
Nov 12 20:53:39.641418 kernel: xor: automatically using best checksumming function   avx
Nov 12 20:53:39.789424 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:53:39.798458 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:53:39.809804 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:53:39.821472 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Nov 12 20:53:39.825847 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:53:39.840536 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:53:39.852053 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Nov 12 20:53:39.876190 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:53:39.884532 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:53:39.922976 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:53:39.934592 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 12 20:53:39.970733 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 12 20:53:39.978103 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 20:53:39.993950 kernel: cryptd: max_cpu_qlen set to 1000 Nov 12 20:53:39.982106 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:53:39.985298 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 20:53:40.004680 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 12 20:53:40.017792 kernel: hv_vmbus: Vmbus version:5.2 Nov 12 20:53:40.026880 kernel: AVX2 version of gcm_enc/dec engaged. Nov 12 20:53:40.027158 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:53:40.038422 kernel: AES CTR mode by8 optimization enabled Nov 12 20:53:40.061263 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 12 20:53:40.061308 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 12 20:53:40.062796 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:53:40.064150 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:53:40.069669 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 20:53:40.078869 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:53:40.079039 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:53:40.082082 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:53:40.102481 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 12 20:53:40.118020 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 12 20:53:40.118046 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Nov 12 20:53:40.121426 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 12 20:53:40.126073 kernel: hv_vmbus: registering driver hid_hyperv Nov 12 20:53:40.126616 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Nov 12 20:53:40.126645 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 12 20:53:40.128421 kernel: hv_vmbus: registering driver hv_netvsc Nov 12 20:53:40.151419 kernel: PTP clock support registered Nov 12 20:53:40.154900 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:53:40.156035 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:53:40.171688 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:53:40.187723 kernel: hv_utils: Registering HyperV Utility Driver Nov 12 20:53:40.187759 kernel: hv_vmbus: registering driver hv_utils Nov 12 20:53:40.191635 kernel: hv_utils: Heartbeat IC version 3.0 Nov 12 20:53:40.191669 kernel: hv_vmbus: registering driver hv_storvsc Nov 12 20:53:40.191684 kernel: hv_utils: Shutdown IC version 3.2 Nov 12 20:53:40.196640 kernel: hv_utils: TimeSync IC version 4.0 Nov 12 20:53:41.368523 systemd-resolved[279]: Clock change detected. Flushing caches. Nov 12 20:53:41.380676 kernel: scsi host1: storvsc_host_t Nov 12 20:53:41.381046 kernel: scsi host0: storvsc_host_t Nov 12 20:53:41.381514 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Nov 12 20:53:41.381596 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Nov 12 20:53:41.384337 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 12 20:53:41.399305 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 20:53:41.420881 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 12 20:53:41.421776 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 12 20:53:41.421799 kernel: hv_netvsc 000d3ab6-07e8-000d-3ab6-07e8000d3ab6 eth0: VF slot 1 added Nov 12 20:53:41.422338 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 12 20:53:41.431645 kernel: hv_vmbus: registering driver hv_pci Nov 12 20:53:41.442618 kernel: hv_pci be0c9140-d10e-452e-bc64-f9cbd8ad5745: PCI VMBus probing: Using version 0x10004 Nov 12 20:53:41.509358 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Nov 12 20:53:41.509508 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Nov 12 20:53:41.509627 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 12 20:53:41.509751 kernel: hv_pci be0c9140-d10e-452e-bc64-f9cbd8ad5745: PCI host bridge to bus d10e:00 Nov 12 20:53:41.509865 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Nov 12 20:53:41.509981 kernel: pci_bus d10e:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Nov 12 20:53:41.510097 kernel: pci_bus d10e:00: No busn resource found for root bus, will use [bus 00-ff] Nov 12 20:53:41.510241 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Nov 12 20:53:41.510408 kernel: pci d10e:00:02.0: [15b3:1016] type 00 class 0x020000 Nov 12 20:53:41.510584 kernel: pci d10e:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 12 20:53:41.510720 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 12 20:53:41.510739 kernel: pci d10e:00:02.0: enabling Extended Tags Nov 12 20:53:41.510901 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 12 20:53:41.511065 kernel: pci d10e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at d10e:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Nov 12 20:53:41.511292 kernel: pci_bus d10e:00: busn_res: [bus 00-ff] end is updated to 00
Nov 12 20:53:41.511440 kernel: pci d10e:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 12 20:53:41.453197 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:53:41.680923 kernel: mlx5_core d10e:00:02.0: enabling device (0000 -> 0002) Nov 12 20:53:41.906688 kernel: mlx5_core d10e:00:02.0: firmware version: 14.30.1284 Nov 12 20:53:41.906895 kernel: hv_netvsc 000d3ab6-07e8-000d-3ab6-07e8000d3ab6 eth0: VF registering: eth1 Nov 12 20:53:41.907503 kernel: mlx5_core d10e:00:02.0 eth1: joined to eth0 Nov 12 20:53:41.907694 kernel: mlx5_core d10e:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 12 20:53:41.913178 kernel: mlx5_core d10e:00:02.0 enP53518s1: renamed from eth1 Nov 12 20:53:41.978245 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Nov 12 20:53:42.066195 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (446) Nov 12 20:53:42.080012 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Nov 12 20:53:42.119915 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Nov 12 20:53:42.196183 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (455) Nov 12 20:53:42.209783 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Nov 12 20:53:42.217620 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Nov 12 20:53:42.228314 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:53:42.239217 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 12 20:53:42.245212 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 12 20:53:43.253177 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 12 20:53:43.253931 disk-uuid[601]: The operation has completed successfully. Nov 12 20:53:43.318625 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 12 20:53:43.318734 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 12 20:53:43.349621 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 12 20:53:43.356040 sh[687]: Success Nov 12 20:53:43.401708 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 12 20:53:43.807917 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 12 20:53:43.819693 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 12 20:53:43.824594 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 12 20:53:43.839179 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77 Nov 12 20:53:43.839214 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:53:43.844661 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 12 20:53:43.847727 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 12 20:53:43.850325 kernel: BTRFS info (device dm-0): using free space tree Nov 12 20:53:44.287107 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 12 20:53:44.292054 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 12 20:53:44.302314 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 12 20:53:44.308899 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Nov 12 20:53:44.319188 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:53:44.323887 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:53:44.323944 kernel: BTRFS info (device sda6): using free space tree Nov 12 20:53:44.357183 kernel: BTRFS info (device sda6): auto enabling async discard Nov 12 20:53:44.368076 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 12 20:53:44.375186 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:53:44.382919 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 12 20:53:44.396299 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 12 20:53:44.412277 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 20:53:44.421294 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 20:53:44.439619 systemd-networkd[871]: lo: Link UP Nov 12 20:53:44.439628 systemd-networkd[871]: lo: Gained carrier Nov 12 20:53:44.441692 systemd-networkd[871]: Enumeration completed Nov 12 20:53:44.441980 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 20:53:44.443675 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:53:44.443679 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 20:53:44.445005 systemd[1]: Reached target network.target - Network. 
Nov 12 20:53:44.504175 kernel: mlx5_core d10e:00:02.0 enP53518s1: Link up Nov 12 20:53:44.533189 kernel: hv_netvsc 000d3ab6-07e8-000d-3ab6-07e8000d3ab6 eth0: Data path switched to VF: enP53518s1 Nov 12 20:53:44.533687 systemd-networkd[871]: enP53518s1: Link UP Nov 12 20:53:44.533880 systemd-networkd[871]: eth0: Link UP Nov 12 20:53:44.534089 systemd-networkd[871]: eth0: Gained carrier Nov 12 20:53:44.534102 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:53:44.544355 systemd-networkd[871]: enP53518s1: Gained carrier Nov 12 20:53:44.574209 systemd-networkd[871]: eth0: DHCPv4 address 10.200.8.44/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 12 20:53:45.684568 systemd-networkd[871]: enP53518s1: Gained IPv6LL Nov 12 20:53:45.748339 systemd-networkd[871]: eth0: Gained IPv6LL Nov 12 20:53:45.924792 ignition[838]: Ignition 2.19.0 Nov 12 20:53:45.924804 ignition[838]: Stage: fetch-offline Nov 12 20:53:45.924845 ignition[838]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:53:45.924855 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 12 20:53:45.924972 ignition[838]: parsed url from cmdline: "" Nov 12 20:53:45.924977 ignition[838]: no config URL provided Nov 12 20:53:45.924984 ignition[838]: reading system config file "/usr/lib/ignition/user.ign" Nov 12 20:53:45.924994 ignition[838]: no config at "/usr/lib/ignition/user.ign" Nov 12 20:53:45.925000 ignition[838]: failed to fetch config: resource requires networking Nov 12 20:53:45.926875 ignition[838]: Ignition finished successfully Nov 12 20:53:45.946335 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 20:53:45.955367 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 12 20:53:45.973137 ignition[880]: Ignition 2.19.0 Nov 12 20:53:45.973148 ignition[880]: Stage: fetch Nov 12 20:53:45.973389 ignition[880]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:53:45.974856 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 12 20:53:45.976407 ignition[880]: parsed url from cmdline: "" Nov 12 20:53:45.976412 ignition[880]: no config URL provided Nov 12 20:53:45.976420 ignition[880]: reading system config file "/usr/lib/ignition/user.ign" Nov 12 20:53:45.976431 ignition[880]: no config at "/usr/lib/ignition/user.ign" Nov 12 20:53:45.978713 ignition[880]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 12 20:53:46.064153 ignition[880]: GET result: OK Nov 12 20:53:46.064364 ignition[880]: config has been read from IMDS userdata Nov 12 20:53:46.064398 ignition[880]: parsing config with SHA512: 2469cc53b60c7dfb1d0a6202d4734fb86fd60139eef017a4a1140ab166e4a6ffc060a0187622d45b33a82b6f4b72e27261eb258fa712a2c3bc4e7e47693ba14c Nov 12 20:53:46.070409 unknown[880]: fetched base config from "system" Nov 12 20:53:46.070488 unknown[880]: fetched base config from "system" Nov 12 20:53:46.070501 unknown[880]: fetched user config from "azure" Nov 12 20:53:46.077762 ignition[880]: fetch: fetch complete Nov 12 20:53:46.077773 ignition[880]: fetch: fetch passed Nov 12 20:53:46.079332 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 12 20:53:46.077827 ignition[880]: Ignition finished successfully Nov 12 20:53:46.093691 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 12 20:53:46.109341 ignition[887]: Ignition 2.19.0 Nov 12 20:53:46.109351 ignition[887]: Stage: kargs Nov 12 20:53:46.109564 ignition[887]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:53:46.111735 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Nov 12 20:53:46.109577 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 12 20:53:46.110768 ignition[887]: kargs: kargs passed Nov 12 20:53:46.110811 ignition[887]: Ignition finished successfully Nov 12 20:53:46.131281 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 12 20:53:46.149897 ignition[895]: Ignition 2.19.0 Nov 12 20:53:46.149906 ignition[895]: Stage: disks Nov 12 20:53:46.150114 ignition[895]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:53:46.150125 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 12 20:53:46.151283 ignition[895]: disks: disks passed Nov 12 20:53:46.156258 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 12 20:53:46.151326 ignition[895]: Ignition finished successfully Nov 12 20:53:46.168042 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 12 20:53:46.171056 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 20:53:46.180193 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 20:53:46.182985 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 20:53:46.188270 systemd[1]: Reached target basic.target - Basic System. Nov 12 20:53:46.201379 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 12 20:53:46.257277 systemd-fsck[903]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Nov 12 20:53:46.260831 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 12 20:53:46.275273 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 12 20:53:46.365180 kernel: EXT4-fs (sda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none. Nov 12 20:53:46.365901 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 12 20:53:46.370437 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Nov 12 20:53:46.441232 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 20:53:46.445606 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 12 20:53:46.451420 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 12 20:53:46.459867 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 12 20:53:46.461178 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (914) Nov 12 20:53:46.461242 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 20:53:46.479504 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:53:46.479532 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:53:46.479551 kernel: BTRFS info (device sda6): using free space tree Nov 12 20:53:46.479670 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 12 20:53:46.486301 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 12 20:53:46.491520 kernel: BTRFS info (device sda6): auto enabling async discard Nov 12 20:53:46.496471 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 12 20:53:47.483091 coreos-metadata[916]: Nov 12 20:53:47.483 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 12 20:53:47.487485 coreos-metadata[916]: Nov 12 20:53:47.485 INFO Fetch successful Nov 12 20:53:47.487485 coreos-metadata[916]: Nov 12 20:53:47.485 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 12 20:53:47.498682 coreos-metadata[916]: Nov 12 20:53:47.498 INFO Fetch successful Nov 12 20:53:47.502382 coreos-metadata[916]: Nov 12 20:53:47.502 INFO wrote hostname ci-4081.2.0-a-1543c8d709 to /sysroot/etc/hostname Nov 12 20:53:47.503951 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 12 20:53:47.663734 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory Nov 12 20:53:47.727197 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory Nov 12 20:53:47.761716 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory Nov 12 20:53:47.767634 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory Nov 12 20:53:49.171770 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 12 20:53:49.181268 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 12 20:53:49.187328 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 12 20:53:49.196016 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 12 20:53:49.202437 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:53:49.220526 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 12 20:53:49.230114 ignition[1038]: INFO : Ignition 2.19.0 Nov 12 20:53:49.230114 ignition[1038]: INFO : Stage: mount Nov 12 20:53:49.234075 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:53:49.234075 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 12 20:53:49.240648 ignition[1038]: INFO : mount: mount passed Nov 12 20:53:49.242754 ignition[1038]: INFO : Ignition finished successfully Nov 12 20:53:49.245404 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 12 20:53:49.256284 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 12 20:53:49.265338 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 20:53:49.281061 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1048) Nov 12 20:53:49.281120 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:53:49.284345 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:53:49.287019 kernel: BTRFS info (device sda6): using free space tree Nov 12 20:53:49.292186 kernel: BTRFS info (device sda6): auto enabling async discard Nov 12 20:53:49.293498 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 12 20:53:49.317435 ignition[1065]: INFO : Ignition 2.19.0 Nov 12 20:53:49.317435 ignition[1065]: INFO : Stage: files Nov 12 20:53:49.321798 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:53:49.321798 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 12 20:53:49.321798 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping Nov 12 20:53:49.372840 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 20:53:49.372840 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 20:53:49.565381 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 20:53:49.569539 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 20:53:49.573309 unknown[1065]: wrote ssh authorized keys file for user: core Nov 12 20:53:49.576272 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 20:53:49.610411 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 12 20:53:49.615732 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 12 20:53:49.615732 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:53:49.615732 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 12 20:53:49.672147 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 12 20:53:49.796677 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" 
Nov 12 20:53:49.802721 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 12 20:53:49.807380 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 20:53:49.812049 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:53:49.817362 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:53:49.817362 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:53:49.826665 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:53:49.831221 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:53:49.835948 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:53:49.840596 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:53:49.845387 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:53:49.850244 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:53:49.856968 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:53:49.863705 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:53:49.869217 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Nov 12 20:53:50.427067 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 12 20:53:50.908283 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:53:50.908283 ignition[1065]: INFO : files: op(c): [started] processing unit "containerd.service" Nov 12 20:53:50.918422 ignition[1065]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 12 20:53:50.918422 ignition[1065]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 12 20:53:50.918422 ignition[1065]: INFO : files: op(c): [finished] processing unit "containerd.service" Nov 12 20:53:50.918422 ignition[1065]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Nov 12 20:53:50.918422 ignition[1065]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:53:50.918422 ignition[1065]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:53:50.918422 ignition[1065]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Nov 12 20:53:50.918422 ignition[1065]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 20:53:50.918422 ignition[1065]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 20:53:50.918422 ignition[1065]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:53:50.918422 ignition[1065]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:53:50.918422 ignition[1065]: INFO : files: files passed Nov 12 20:53:50.918422 ignition[1065]: INFO : Ignition finished successfully Nov 12 20:53:50.919247 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 20:53:50.940376 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 20:53:50.949333 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 20:53:51.001030 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:53:51.001030 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:53:50.963843 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 20:53:51.015671 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:53:50.963954 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 20:53:50.985646 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:53:50.991132 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 20:53:51.000302 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 20:53:51.035980 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 20:53:51.036101 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 20:53:51.042180 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 20:53:51.048536 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 20:53:51.053813 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 20:53:51.061495 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 20:53:51.074425 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:53:51.083310 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 20:53:51.094485 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:53:51.095711 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:53:51.096117 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 20:53:51.097001 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 20:53:51.097095 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:53:51.098443 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 20:53:51.098918 systemd[1]: Stopped target basic.target - Basic System. Nov 12 20:53:51.099368 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 20:53:51.099818 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 20:53:51.100286 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 20:53:51.100743 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 20:53:51.101202 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 20:53:51.101654 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 20:53:51.102109 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Nov 12 20:53:51.102565 systemd[1]: Stopped target swap.target - Swaps. Nov 12 20:53:51.102979 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 20:53:51.103103 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:53:51.103902 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:53:51.104566 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:53:51.104967 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 20:53:51.148793 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:53:51.201424 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 20:53:51.201600 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 20:53:51.209813 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 20:53:51.209950 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:53:51.216713 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 20:53:51.216841 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 20:53:51.225666 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 12 20:53:51.228273 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 12 20:53:51.244412 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 20:53:51.248214 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 20:53:51.248372 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:53:51.257906 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 20:53:51.264287 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 20:53:51.264639 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Nov 12 20:53:51.270478 ignition[1117]: INFO : Ignition 2.19.0 Nov 12 20:53:51.275907 ignition[1117]: INFO : Stage: umount Nov 12 20:53:51.275907 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:53:51.275907 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 12 20:53:51.275907 ignition[1117]: INFO : umount: umount passed Nov 12 20:53:51.275907 ignition[1117]: INFO : Ignition finished successfully Nov 12 20:53:51.276506 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 20:53:51.276657 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 20:53:51.289773 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 20:53:51.289855 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 20:53:51.294942 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 20:53:51.295017 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 20:53:51.302346 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 20:53:51.306170 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 20:53:51.306232 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 20:53:51.311628 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 20:53:51.311676 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 20:53:51.314596 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 12 20:53:51.314639 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 12 20:53:51.319914 systemd[1]: Stopped target network.target - Network. Nov 12 20:53:51.324747 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 20:53:51.324805 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 20:53:51.331551 systemd[1]: Stopped target paths.target - Path Units. 
Nov 12 20:53:51.336237 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 20:53:51.344401 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:53:51.345367 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 20:53:51.345841 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 20:53:51.348671 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 20:53:51.348709 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:53:51.349629 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 20:53:51.349661 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:53:51.370233 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 20:53:51.373363 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 20:53:51.378465 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 20:53:51.381329 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 20:53:51.384351 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 20:53:51.390903 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 20:53:51.426208 systemd-networkd[871]: eth0: DHCPv6 lease lost Nov 12 20:53:51.429323 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 20:53:51.429433 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 20:53:51.437927 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 20:53:51.440437 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 20:53:51.448175 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 20:53:51.448227 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:53:51.459260 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Nov 12 20:53:51.461850 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 20:53:51.461909 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 20:53:51.465312 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 20:53:51.465353 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:53:51.470489 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 20:53:51.470538 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 20:53:51.473435 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 20:53:51.473485 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:53:51.481198 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:53:51.508723 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 20:53:51.511302 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:53:51.515299 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 20:53:51.515372 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 20:53:51.519414 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 20:53:51.519446 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:53:51.519878 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 20:53:51.519914 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 20:53:51.523146 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 20:53:51.523195 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Nov 12 20:53:51.558126 kernel: hv_netvsc 000d3ab6-07e8-000d-3ab6-07e8000d3ab6 eth0: Data path switched from VF: enP53518s1 Nov 12 20:53:51.524075 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:53:51.524108 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:53:51.564364 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 20:53:51.567337 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 20:53:51.570614 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:53:51.577355 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:53:51.577439 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:53:51.584742 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 20:53:51.584830 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 20:53:51.590035 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 20:53:51.590110 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 20:53:51.840339 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 20:53:51.840489 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 20:53:51.849354 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 20:53:51.854866 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 20:53:51.854930 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 20:53:51.864716 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 20:53:52.023845 systemd[1]: Switching root. 
Nov 12 20:53:52.052058 systemd-journald[176]: Journal stopped Nov 12 20:53:39.064528 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024 Nov 12 20:53:39.064555 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:53:39.064567 kernel: BIOS-provided physical RAM map: Nov 12 20:53:39.064574 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 12 20:53:39.064579 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Nov 12 20:53:39.064587 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Nov 12 20:53:39.064596 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Nov 12 20:53:39.064604 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Nov 12 20:53:39.064612 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Nov 12 20:53:39.064619 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Nov 12 20:53:39.064626 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Nov 12 20:53:39.064635 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Nov 12 20:53:39.064641 kernel: printk: bootconsole [earlyser0] enabled Nov 12 20:53:39.064650 kernel: NX (Execute Disable) protection: active Nov 12 20:53:39.064662 kernel: APIC: Static calls initialized Nov 12 20:53:39.064670 kernel: efi: EFI v2.7 by Microsoft Nov 12 20:53:39.064679 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 
SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 Nov 12 20:53:39.064687 kernel: SMBIOS 3.1.0 present. Nov 12 20:53:39.064696 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Nov 12 20:53:39.064703 kernel: Hypervisor detected: Microsoft Hyper-V Nov 12 20:53:39.064710 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Nov 12 20:53:39.064720 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Nov 12 20:53:39.064727 kernel: Hyper-V: Nested features: 0x1e0101 Nov 12 20:53:39.064734 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Nov 12 20:53:39.064745 kernel: Hyper-V: Using hypercall for remote TLB flush Nov 12 20:53:39.064752 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 12 20:53:39.064760 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 12 20:53:39.064770 kernel: tsc: Marking TSC unstable due to running on Hyper-V Nov 12 20:53:39.064777 kernel: tsc: Detected 2593.905 MHz processor Nov 12 20:53:39.064785 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 12 20:53:39.064795 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 12 20:53:39.064802 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Nov 12 20:53:39.064810 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 12 20:53:39.064821 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 12 20:53:39.064828 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Nov 12 20:53:39.064837 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Nov 12 20:53:39.064846 kernel: Using GB pages for direct mapping Nov 12 20:53:39.064852 kernel: Secure boot disabled Nov 12 20:53:39.064861 kernel: ACPI: Early table checksum verification disabled Nov 12 20:53:39.064870 
kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Nov 12 20:53:39.064880 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:53:39.064893 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:53:39.064900 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Nov 12 20:53:39.064908 kernel: ACPI: FACS 0x000000003FFFE000 000040 Nov 12 20:53:39.064918 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:53:39.064926 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:53:39.064935 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:53:39.064946 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:53:39.064953 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:53:39.064962 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:53:39.064971 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:53:39.064978 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Nov 12 20:53:39.064988 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Nov 12 20:53:39.064997 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Nov 12 20:53:39.065004 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Nov 12 20:53:39.065016 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Nov 12 20:53:39.065023 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Nov 12 20:53:39.065031 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Nov 12 20:53:39.065041 kernel: ACPI: Reserving SRAT table memory at [mem 
0x3ffd4000-0x3ffd42cf] Nov 12 20:53:39.065048 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Nov 12 20:53:39.065057 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Nov 12 20:53:39.065067 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 12 20:53:39.065075 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 12 20:53:39.065085 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Nov 12 20:53:39.065097 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Nov 12 20:53:39.065105 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Nov 12 20:53:39.065115 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Nov 12 20:53:39.065124 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Nov 12 20:53:39.065132 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Nov 12 20:53:39.065143 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Nov 12 20:53:39.065150 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Nov 12 20:53:39.065159 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Nov 12 20:53:39.065168 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Nov 12 20:53:39.065178 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Nov 12 20:53:39.065188 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Nov 12 20:53:39.065196 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Nov 12 20:53:39.065203 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Nov 12 20:53:39.065211 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Nov 12 20:53:39.065221 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Nov 12 20:53:39.065229 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + 
[mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Nov 12 20:53:39.065236 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Nov 12 20:53:39.065243 kernel: Zone ranges: Nov 12 20:53:39.065254 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 12 20:53:39.065264 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 12 20:53:39.065273 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Nov 12 20:53:39.065282 kernel: Movable zone start for each node Nov 12 20:53:39.065293 kernel: Early memory node ranges Nov 12 20:53:39.065302 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 12 20:53:39.065314 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Nov 12 20:53:39.065323 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Nov 12 20:53:39.065333 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Nov 12 20:53:39.065345 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Nov 12 20:53:39.065355 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 12 20:53:39.065362 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 12 20:53:39.065373 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Nov 12 20:53:39.065380 kernel: ACPI: PM-Timer IO Port: 0x408 Nov 12 20:53:39.065388 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Nov 12 20:53:39.065407 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Nov 12 20:53:39.065418 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 12 20:53:39.065428 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 12 20:53:39.065440 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Nov 12 20:53:39.065452 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 12 20:53:39.065464 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Nov 12 20:53:39.065476 kernel: Booting paravirtualized kernel on Hyper-V Nov 12 20:53:39.065490 kernel: clocksource: 
refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 12 20:53:39.065502 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 12 20:53:39.065511 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Nov 12 20:53:39.065525 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Nov 12 20:53:39.065537 kernel: pcpu-alloc: [0] 0 1 Nov 12 20:53:39.065554 kernel: Hyper-V: PV spinlocks enabled Nov 12 20:53:39.065567 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 12 20:53:39.065583 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:53:39.065596 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 12 20:53:39.065610 kernel: random: crng init done Nov 12 20:53:39.065625 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 12 20:53:39.065639 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 12 20:53:39.065654 kernel: Fallback order for Node 0: 0 Nov 12 20:53:39.065672 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Nov 12 20:53:39.065697 kernel: Policy zone: Normal Nov 12 20:53:39.065716 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 12 20:53:39.065731 kernel: software IO TLB: area num 2. 
Nov 12 20:53:39.065747 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 310124K reserved, 0K cma-reserved) Nov 12 20:53:39.065763 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 12 20:53:39.065778 kernel: ftrace: allocating 37799 entries in 148 pages Nov 12 20:53:39.065794 kernel: ftrace: allocated 148 pages with 3 groups Nov 12 20:53:39.065810 kernel: Dynamic Preempt: voluntary Nov 12 20:53:39.065825 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 12 20:53:39.065845 kernel: rcu: RCU event tracing is enabled. Nov 12 20:53:39.065863 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 12 20:53:39.065878 kernel: Trampoline variant of Tasks RCU enabled. Nov 12 20:53:39.065892 kernel: Rude variant of Tasks RCU enabled. Nov 12 20:53:39.065907 kernel: Tracing variant of Tasks RCU enabled. Nov 12 20:53:39.065920 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 12 20:53:39.065937 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 12 20:53:39.065952 kernel: Using NULL legacy PIC Nov 12 20:53:39.065964 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Nov 12 20:53:39.065977 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 12 20:53:39.065990 kernel: Console: colour dummy device 80x25 Nov 12 20:53:39.066004 kernel: printk: console [tty1] enabled Nov 12 20:53:39.066019 kernel: printk: console [ttyS0] enabled Nov 12 20:53:39.066034 kernel: printk: bootconsole [earlyser0] disabled Nov 12 20:53:39.066049 kernel: ACPI: Core revision 20230628 Nov 12 20:53:39.066064 kernel: Failed to register legacy timer interrupt Nov 12 20:53:39.066082 kernel: APIC: Switch to symmetric I/O mode setup Nov 12 20:53:39.066097 kernel: Hyper-V: enabling crash_kexec_post_notifiers Nov 12 20:53:39.066112 kernel: Hyper-V: Using IPI hypercalls Nov 12 20:53:39.066127 kernel: APIC: send_IPI() replaced with hv_send_ipi() Nov 12 20:53:39.066142 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Nov 12 20:53:39.066157 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Nov 12 20:53:39.066172 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Nov 12 20:53:39.066186 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Nov 12 20:53:39.066200 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Nov 12 20:53:39.066217 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Nov 12 20:53:39.066231 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 12 20:53:39.066244 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Nov 12 20:53:39.066259 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 12 20:53:39.066274 kernel: Spectre V2 : Mitigation: Retpolines Nov 12 20:53:39.066289 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Nov 12 20:53:39.066303 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Nov 12 20:53:39.066318 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Nov 12 20:53:39.066332 kernel: RETBleed: Vulnerable Nov 12 20:53:39.066351 kernel: Speculative Store Bypass: Vulnerable Nov 12 20:53:39.066365 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Nov 12 20:53:39.066379 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 12 20:53:39.066392 kernel: GDS: Unknown: Dependent on hypervisor status Nov 12 20:53:39.066431 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 12 20:53:39.066446 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 12 20:53:39.066459 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 12 20:53:39.066472 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 12 20:53:39.066486 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 12 20:53:39.066500 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 12 20:53:39.066513 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 12 20:53:39.066529 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Nov 12 20:53:39.066559 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Nov 12 20:53:39.066573 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Nov 12 20:53:39.066587 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Nov 12 20:53:39.066600 kernel: Freeing SMP alternatives memory: 32K Nov 12 20:53:39.066613 kernel: pid_max: default: 32768 minimum: 301 Nov 12 20:53:39.066628 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 12 20:53:39.066642 kernel: landlock: Up and running. Nov 12 20:53:39.066656 kernel: SELinux: Initializing. 
Nov 12 20:53:39.066670 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 12 20:53:39.066685 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 12 20:53:39.066699 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Nov 12 20:53:39.066717 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:53:39.066731 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:53:39.066746 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:53:39.066762 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Nov 12 20:53:39.066776 kernel: signal: max sigframe size: 3632 Nov 12 20:53:39.066792 kernel: rcu: Hierarchical SRCU implementation. Nov 12 20:53:39.066807 kernel: rcu: Max phase no-delay instances is 400. Nov 12 20:53:39.066822 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 12 20:53:39.066837 kernel: smp: Bringing up secondary CPUs ... Nov 12 20:53:39.066855 kernel: smpboot: x86: Booting SMP configuration: Nov 12 20:53:39.066870 kernel: .... node #0, CPUs: #1 Nov 12 20:53:39.066886 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Nov 12 20:53:39.066903 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Nov 12 20:53:39.066917 kernel: smp: Brought up 1 node, 2 CPUs Nov 12 20:53:39.066933 kernel: smpboot: Max logical packages: 1 Nov 12 20:53:39.066948 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Nov 12 20:53:39.066963 kernel: devtmpfs: initialized Nov 12 20:53:39.066981 kernel: x86/mm: Memory block size: 128MB Nov 12 20:53:39.066996 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Nov 12 20:53:39.067012 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 20:53:39.067027 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 12 20:53:39.067042 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 20:53:39.067057 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 20:53:39.067072 kernel: audit: initializing netlink subsys (disabled) Nov 12 20:53:39.067088 kernel: audit: type=2000 audit(1731444818.028:1): state=initialized audit_enabled=0 res=1 Nov 12 20:53:39.067103 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 20:53:39.067121 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 12 20:53:39.067136 kernel: cpuidle: using governor menu Nov 12 20:53:39.067151 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 20:53:39.067166 kernel: dca service started, version 1.12.1 Nov 12 20:53:39.067181 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Nov 12 20:53:39.067197 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 12 20:53:39.067212 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 20:53:39.067227 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 20:53:39.067242 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 20:53:39.067260 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 20:53:39.067276 kernel: ACPI: Added _OSI(Module Device)
Nov 12 20:53:39.067290 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 20:53:39.067305 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 20:53:39.067321 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 20:53:39.067335 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 20:53:39.067351 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 20:53:39.067367 kernel: ACPI: Interpreter enabled
Nov 12 20:53:39.067382 kernel: ACPI: PM: (supports S0 S5)
Nov 12 20:53:39.067416 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 20:53:39.067431 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 20:53:39.067446 kernel: PCI: Ignoring E820 reservations for host bridge windows
Nov 12 20:53:39.067462 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Nov 12 20:53:39.067477 kernel: iommu: Default domain type: Translated
Nov 12 20:53:39.067492 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 20:53:39.067507 kernel: efivars: Registered efivars operations
Nov 12 20:53:39.067521 kernel: PCI: Using ACPI for IRQ routing
Nov 12 20:53:39.067536 kernel: PCI: System does not support PCI
Nov 12 20:53:39.067554 kernel: vgaarb: loaded
Nov 12 20:53:39.067569 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Nov 12 20:53:39.067584 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 20:53:39.067600 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 20:53:39.067615 kernel: pnp: PnP ACPI init
Nov 12 20:53:39.067631 kernel: pnp: PnP ACPI: found 3 devices
Nov 12 20:53:39.067646 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 20:53:39.067661 kernel: NET: Registered PF_INET protocol family
Nov 12 20:53:39.067677 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 12 20:53:39.067695 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 12 20:53:39.067710 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 20:53:39.067725 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 20:53:39.067740 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Nov 12 20:53:39.067755 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 12 20:53:39.067770 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 12 20:53:39.067785 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 12 20:53:39.067800 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 20:53:39.067815 kernel: NET: Registered PF_XDP protocol family
Nov 12 20:53:39.067834 kernel: PCI: CLS 0 bytes, default 64
Nov 12 20:53:39.067849 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 12 20:53:39.067865 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Nov 12 20:53:39.067880 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 12 20:53:39.067895 kernel: Initialise system trusted keyrings
Nov 12 20:53:39.067909 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 12 20:53:39.067924 kernel: Key type asymmetric registered
Nov 12 20:53:39.067939 kernel: Asymmetric key parser 'x509' registered
Nov 12 20:53:39.067953 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 20:53:39.067971 kernel: io scheduler mq-deadline registered
Nov 12 20:53:39.067986 kernel: io scheduler kyber registered
Nov 12 20:53:39.068001 kernel: io scheduler bfq registered
Nov 12 20:53:39.068016 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 20:53:39.068031 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 20:53:39.068045 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 20:53:39.068059 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 12 20:53:39.068075 kernel: i8042: PNP: No PS/2 controller found.
Nov 12 20:53:39.068250 kernel: rtc_cmos 00:02: registered as rtc0
Nov 12 20:53:39.068417 kernel: rtc_cmos 00:02: setting system clock to 2024-11-12T20:53:38 UTC (1731444818)
Nov 12 20:53:39.068563 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Nov 12 20:53:39.068583 kernel: intel_pstate: CPU model not supported
Nov 12 20:53:39.068598 kernel: efifb: probing for efifb
Nov 12 20:53:39.068612 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Nov 12 20:53:39.068625 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Nov 12 20:53:39.068639 kernel: efifb: scrolling: redraw
Nov 12 20:53:39.068657 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 12 20:53:39.068671 kernel: Console: switching to colour frame buffer device 128x48
Nov 12 20:53:39.068685 kernel: fb0: EFI VGA frame buffer device
Nov 12 20:53:39.068699 kernel: pstore: Using crash dump compression: deflate
Nov 12 20:53:39.068713 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 12 20:53:39.068727 kernel: NET: Registered PF_INET6 protocol family
Nov 12 20:53:39.068742 kernel: Segment Routing with IPv6
Nov 12 20:53:39.068755 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 20:53:39.068769 kernel: NET: Registered PF_PACKET protocol family
Nov 12 20:53:39.068783 kernel: Key type dns_resolver registered
Nov 12 20:53:39.068799 kernel: IPI shorthand broadcast: enabled
Nov 12 20:53:39.068814 kernel: sched_clock: Marking stable (840002700, 48324800)->(1160748500, -272421000)
Nov 12 20:53:39.068828 kernel: registered taskstats version 1
Nov 12 20:53:39.068844 kernel: Loading compiled-in X.509 certificates
Nov 12 20:53:39.068860 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a'
Nov 12 20:53:39.068874 kernel: Key type .fscrypt registered
Nov 12 20:53:39.068888 kernel: Key type fscrypt-provisioning registered
Nov 12 20:53:39.068902 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 20:53:39.068919 kernel: ima: Allocated hash algorithm: sha1
Nov 12 20:53:39.068934 kernel: ima: No architecture policies found
Nov 12 20:53:39.068949 kernel: clk: Disabling unused clocks
Nov 12 20:53:39.068964 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 12 20:53:39.068978 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 20:53:39.068993 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 12 20:53:39.069008 kernel: Run /init as init process
Nov 12 20:53:39.069022 kernel: with arguments:
Nov 12 20:53:39.069036 kernel: /init
Nov 12 20:53:39.069053 kernel: with environment:
Nov 12 20:53:39.069067 kernel: HOME=/
Nov 12 20:53:39.069081 kernel: TERM=linux
Nov 12 20:53:39.069094 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 20:53:39.069112 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:53:39.069130 systemd[1]: Detected virtualization microsoft.
Nov 12 20:53:39.069145 systemd[1]: Detected architecture x86-64.
Nov 12 20:53:39.069160 systemd[1]: Running in initrd.
Nov 12 20:53:39.069177 systemd[1]: No hostname configured, using default hostname.
Nov 12 20:53:39.069192 systemd[1]: Hostname set to .
Nov 12 20:53:39.069208 systemd[1]: Initializing machine ID from random generator.
Nov 12 20:53:39.069223 systemd[1]: Queued start job for default target initrd.target.
Nov 12 20:53:39.069238 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:53:39.069254 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:53:39.069271 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 20:53:39.069286 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:53:39.069304 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 20:53:39.069320 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 20:53:39.069338 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 20:53:39.069354 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 20:53:39.069370 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:53:39.069386 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:53:39.069422 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:53:39.069441 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:53:39.069457 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:53:39.069473 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:53:39.069489 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:53:39.069504 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:53:39.069520 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:53:39.069536 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:53:39.069552 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:53:39.069568 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:53:39.069586 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:53:39.069602 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:53:39.069618 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 20:53:39.069634 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:53:39.069649 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 20:53:39.069665 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 20:53:39.069681 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:53:39.069694 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:53:39.069744 systemd-journald[176]: Collecting audit messages is disabled.
Nov 12 20:53:39.069780 systemd-journald[176]: Journal started
Nov 12 20:53:39.069816 systemd-journald[176]: Runtime Journal (/run/log/journal/b9d9bd314bea40d89ae12a795b53be4d) is 8.0M, max 158.8M, 150.8M free.
Nov 12 20:53:39.076436 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:39.087308 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:53:39.088144 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 20:53:39.094288 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:53:39.101241 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 20:53:39.103643 systemd-modules-load[177]: Inserted module 'overlay'
Nov 12 20:53:39.103825 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:39.122588 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:53:39.127517 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:53:39.134481 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:53:39.161420 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:53:39.164967 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:53:39.172267 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:53:39.182162 kernel: Bridge firewalling registered
Nov 12 20:53:39.182476 systemd-modules-load[177]: Inserted module 'br_netfilter'
Nov 12 20:53:39.186535 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:53:39.194545 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:53:39.200993 dracut-cmdline[200]: dracut-dracut-053
Nov 12 20:53:39.204640 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:53:39.203571 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:53:39.223158 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:53:39.241132 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:53:39.258221 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:53:39.265496 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:53:39.279611 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:53:39.308419 kernel: SCSI subsystem initialized
Nov 12 20:53:39.317926 systemd-resolved[279]: Positive Trust Anchors:
Nov 12 20:53:39.317939 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:53:39.326617 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:53:39.317993 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:53:39.322932 systemd-resolved[279]: Defaulting to hostname 'linux'.
Nov 12 20:53:39.346930 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:53:39.350162 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:53:39.360560 kernel: iscsi: registered transport (tcp)
Nov 12 20:53:39.382423 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:53:39.382471 kernel: QLogic iSCSI HBA Driver
Nov 12 20:53:39.417202 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:53:39.427518 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:53:39.458983 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:53:39.459039 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:53:39.462598 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:53:39.502428 kernel: raid6: avx512x4 gen() 18342 MB/s
Nov 12 20:53:39.521414 kernel: raid6: avx512x2 gen() 18377 MB/s
Nov 12 20:53:39.540412 kernel: raid6: avx512x1 gen() 18333 MB/s
Nov 12 20:53:39.559416 kernel: raid6: avx2x4 gen() 18312 MB/s
Nov 12 20:53:39.578410 kernel: raid6: avx2x2 gen() 18279 MB/s
Nov 12 20:53:39.598590 kernel: raid6: avx2x1 gen() 14058 MB/s
Nov 12 20:53:39.598628 kernel: raid6: using algorithm avx512x2 gen() 18377 MB/s
Nov 12 20:53:39.620064 kernel: raid6: .... xor() 30370 MB/s, rmw enabled
Nov 12 20:53:39.620096 kernel: raid6: using avx512x2 recovery algorithm
Nov 12 20:53:39.641418 kernel: xor: automatically using best checksumming function avx
Nov 12 20:53:39.789424 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:53:39.798458 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:53:39.809804 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:53:39.821472 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Nov 12 20:53:39.825847 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:53:39.840536 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:53:39.852053 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Nov 12 20:53:39.876190 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:53:39.884532 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:53:39.922976 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:53:39.934592 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:53:39.970733 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:53:39.978103 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:53:39.993950 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:53:39.982106 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:53:39.985298 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:53:40.004680 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:53:40.017792 kernel: hv_vmbus: Vmbus version:5.2
Nov 12 20:53:40.026880 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 20:53:40.027158 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:53:40.038422 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:53:40.061263 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 12 20:53:40.061308 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 12 20:53:40.062796 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:53:40.064150 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:53:40.069669 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:53:40.078869 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:53:40.079039 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:40.082082 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:40.102481 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:40.118020 kernel: hv_vmbus: registering driver hyperv_keyboard
Nov 12 20:53:40.118046 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Nov 12 20:53:40.121426 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 12 20:53:40.126073 kernel: hv_vmbus: registering driver hid_hyperv
Nov 12 20:53:40.126616 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Nov 12 20:53:40.126645 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Nov 12 20:53:40.128421 kernel: hv_vmbus: registering driver hv_netvsc
Nov 12 20:53:40.151419 kernel: PTP clock support registered
Nov 12 20:53:40.154900 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:53:40.156035 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:40.171688 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:40.187723 kernel: hv_utils: Registering HyperV Utility Driver
Nov 12 20:53:40.187759 kernel: hv_vmbus: registering driver hv_utils
Nov 12 20:53:40.191635 kernel: hv_utils: Heartbeat IC version 3.0
Nov 12 20:53:40.191669 kernel: hv_vmbus: registering driver hv_storvsc
Nov 12 20:53:40.191684 kernel: hv_utils: Shutdown IC version 3.2
Nov 12 20:53:40.196640 kernel: hv_utils: TimeSync IC version 4.0
Nov 12 20:53:41.368523 systemd-resolved[279]: Clock change detected. Flushing caches.
Nov 12 20:53:41.380676 kernel: scsi host1: storvsc_host_t
Nov 12 20:53:41.381046 kernel: scsi host0: storvsc_host_t
Nov 12 20:53:41.381514 kernel: scsi 0:0:0:0: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
Nov 12 20:53:41.381596 kernel: scsi 0:0:0:2: CD-ROM           Msft     Virtual DVD-ROM  1.0  PQ: 0 ANSI: 0
Nov 12 20:53:41.384337 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:41.399305 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:53:41.420881 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Nov 12 20:53:41.421776 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 12 20:53:41.421799 kernel: hv_netvsc 000d3ab6-07e8-000d-3ab6-07e8000d3ab6 eth0: VF slot 1 added
Nov 12 20:53:41.422338 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Nov 12 20:53:41.431645 kernel: hv_vmbus: registering driver hv_pci
Nov 12 20:53:41.442618 kernel: hv_pci be0c9140-d10e-452e-bc64-f9cbd8ad5745: PCI VMBus probing: Using version 0x10004
Nov 12 20:53:41.509358 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Nov 12 20:53:41.509508 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Nov 12 20:53:41.509627 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 12 20:53:41.509751 kernel: hv_pci be0c9140-d10e-452e-bc64-f9cbd8ad5745: PCI host bridge to bus d10e:00
Nov 12 20:53:41.509865 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Nov 12 20:53:41.509981 kernel: pci_bus d10e:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Nov 12 20:53:41.510097 kernel: pci_bus d10e:00: No busn resource found for root bus, will use [bus 00-ff]
Nov 12 20:53:41.510241 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Nov 12 20:53:41.510408 kernel: pci d10e:00:02.0: [15b3:1016] type 00 class 0x020000
Nov 12 20:53:41.510584 kernel: pci d10e:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Nov 12 20:53:41.510720 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:53:41.510739 kernel: pci d10e:00:02.0: enabling Extended Tags
Nov 12 20:53:41.510901 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 12 20:53:41.511065 kernel: pci d10e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at d10e:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Nov 12 20:53:41.511292 kernel: pci_bus d10e:00: busn_res: [bus 00-ff] end is updated to 00
Nov 12 20:53:41.511440 kernel: pci d10e:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Nov 12 20:53:41.453197 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:53:41.680923 kernel: mlx5_core d10e:00:02.0: enabling device (0000 -> 0002)
Nov 12 20:53:41.906688 kernel: mlx5_core d10e:00:02.0: firmware version: 14.30.1284
Nov 12 20:53:41.906895 kernel: hv_netvsc 000d3ab6-07e8-000d-3ab6-07e8000d3ab6 eth0: VF registering: eth1
Nov 12 20:53:41.907503 kernel: mlx5_core d10e:00:02.0 eth1: joined to eth0
Nov 12 20:53:41.907694 kernel: mlx5_core d10e:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Nov 12 20:53:41.913178 kernel: mlx5_core d10e:00:02.0 enP53518s1: renamed from eth1
Nov 12 20:53:41.978245 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Nov 12 20:53:42.066195 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (446)
Nov 12 20:53:42.080012 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Nov 12 20:53:42.119915 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Nov 12 20:53:42.196183 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (455)
Nov 12 20:53:42.209783 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Nov 12 20:53:42.217620 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Nov 12 20:53:42.228314 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:53:42.239217 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:53:42.245212 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:53:43.253177 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:53:43.253931 disk-uuid[601]: The operation has completed successfully.
Nov 12 20:53:43.318625 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:53:43.318734 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:53:43.349621 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:53:43.356040 sh[687]: Success
Nov 12 20:53:43.401708 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 12 20:53:43.807917 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:53:43.819693 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:53:43.824594 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 20:53:43.839179 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:53:43.839214 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:43.844661 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:53:43.847727 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:53:43.850325 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:53:44.287107 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:53:44.292054 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:53:44.302314 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:53:44.308899 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:53:44.319188 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:44.323887 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:44.323944 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:53:44.357183 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:53:44.368076 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 20:53:44.375186 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:44.382919 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:53:44.396299 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:53:44.412277 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:53:44.421294 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:53:44.439619 systemd-networkd[871]: lo: Link UP
Nov 12 20:53:44.439628 systemd-networkd[871]: lo: Gained carrier
Nov 12 20:53:44.441692 systemd-networkd[871]: Enumeration completed
Nov 12 20:53:44.441980 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:53:44.443675 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:53:44.443679 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:53:44.445005 systemd[1]: Reached target network.target - Network.
Nov 12 20:53:44.504175 kernel: mlx5_core d10e:00:02.0 enP53518s1: Link up
Nov 12 20:53:44.533189 kernel: hv_netvsc 000d3ab6-07e8-000d-3ab6-07e8000d3ab6 eth0: Data path switched to VF: enP53518s1
Nov 12 20:53:44.533687 systemd-networkd[871]: enP53518s1: Link UP
Nov 12 20:53:44.533880 systemd-networkd[871]: eth0: Link UP
Nov 12 20:53:44.534089 systemd-networkd[871]: eth0: Gained carrier
Nov 12 20:53:44.534102 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:53:44.544355 systemd-networkd[871]: enP53518s1: Gained carrier
Nov 12 20:53:44.574209 systemd-networkd[871]: eth0: DHCPv4 address 10.200.8.44/24, gateway 10.200.8.1 acquired from 168.63.129.16
Nov 12 20:53:45.684568 systemd-networkd[871]: enP53518s1: Gained IPv6LL
Nov 12 20:53:45.748339 systemd-networkd[871]: eth0: Gained IPv6LL
Nov 12 20:53:45.924792 ignition[838]: Ignition 2.19.0
Nov 12 20:53:45.924804 ignition[838]: Stage: fetch-offline
Nov 12 20:53:45.924845 ignition[838]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:45.924855 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:53:45.924972 ignition[838]: parsed url from cmdline: ""
Nov 12 20:53:45.924977 ignition[838]: no config URL provided
Nov 12 20:53:45.924984 ignition[838]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:53:45.924994 ignition[838]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:53:45.925000 ignition[838]: failed to fetch config: resource requires networking
Nov 12 20:53:45.926875 ignition[838]: Ignition finished successfully
Nov 12 20:53:45.946335 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:53:45.955367 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 12 20:53:45.973137 ignition[880]: Ignition 2.19.0
Nov 12 20:53:45.973148 ignition[880]: Stage: fetch
Nov 12 20:53:45.973389 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:45.974856 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:53:45.976407 ignition[880]: parsed url from cmdline: ""
Nov 12 20:53:45.976412 ignition[880]: no config URL provided
Nov 12 20:53:45.976420 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:53:45.976431 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:53:45.978713 ignition[880]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Nov 12 20:53:46.064153 ignition[880]: GET result: OK
Nov 12 20:53:46.064364 ignition[880]: config has been read from IMDS userdata
Nov 12 20:53:46.064398 ignition[880]: parsing config with SHA512: 2469cc53b60c7dfb1d0a6202d4734fb86fd60139eef017a4a1140ab166e4a6ffc060a0187622d45b33a82b6f4b72e27261eb258fa712a2c3bc4e7e47693ba14c
Nov 12 20:53:46.070409 unknown[880]: fetched base config from "system"
Nov 12 20:53:46.070488 unknown[880]: fetched base config from "system"
Nov 12 20:53:46.070501 unknown[880]: fetched user config from "azure"
Nov 12 20:53:46.077762 ignition[880]: fetch: fetch complete
Nov 12 20:53:46.077773 ignition[880]: fetch: fetch passed
Nov 12 20:53:46.079332 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 12 20:53:46.077827 ignition[880]: Ignition finished successfully
Nov 12 20:53:46.093691 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 20:53:46.109341 ignition[887]: Ignition 2.19.0
Nov 12 20:53:46.109351 ignition[887]: Stage: kargs
Nov 12 20:53:46.109564 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:46.111735 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:53:46.109577 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:53:46.110768 ignition[887]: kargs: kargs passed
Nov 12 20:53:46.110811 ignition[887]: Ignition finished successfully
Nov 12 20:53:46.131281 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 20:53:46.149897 ignition[895]: Ignition 2.19.0
Nov 12 20:53:46.149906 ignition[895]: Stage: disks
Nov 12 20:53:46.150114 ignition[895]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:46.150125 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:53:46.151283 ignition[895]: disks: disks passed
Nov 12 20:53:46.156258 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 20:53:46.151326 ignition[895]: Ignition finished successfully
Nov 12 20:53:46.168042 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:53:46.171056 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:53:46.180193 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:53:46.182985 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:53:46.188270 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:53:46.201379 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:53:46.257277 systemd-fsck[903]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Nov 12 20:53:46.260831 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:53:46.275273 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:53:46.365180 kernel: EXT4-fs (sda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:53:46.365901 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:53:46.370437 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:53:46.441232 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:53:46.445606 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:53:46.451420 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 12 20:53:46.459867 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:53:46.461178 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (914)
Nov 12 20:53:46.461242 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:53:46.479504 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:46.479532 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:46.479551 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:53:46.479670 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 20:53:46.486301 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:53:46.491520 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:53:46.496471 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:53:47.483091 coreos-metadata[916]: Nov 12 20:53:47.483 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Nov 12 20:53:47.487485 coreos-metadata[916]: Nov 12 20:53:47.485 INFO Fetch successful
Nov 12 20:53:47.487485 coreos-metadata[916]: Nov 12 20:53:47.485 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Nov 12 20:53:47.498682 coreos-metadata[916]: Nov 12 20:53:47.498 INFO Fetch successful
Nov 12 20:53:47.502382 coreos-metadata[916]: Nov 12 20:53:47.502 INFO wrote hostname ci-4081.2.0-a-1543c8d709 to /sysroot/etc/hostname
Nov 12 20:53:47.503951 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 12 20:53:47.663734 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 20:53:47.727197 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory
Nov 12 20:53:47.761716 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 20:53:47.767634 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 20:53:49.171770 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 20:53:49.181268 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 20:53:49.187328 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 20:53:49.196016 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 20:53:49.202437 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:49.220526 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
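Above, coreos-metadata fetches the instance name from IMDS and writes it into the target root's /etc/hostname before the pivot. A rough Python sketch of the same flow, under stated assumptions (the real agent is Rust; `fetch_instance_name` and `write_hostname` are illustrative names, and the `Metadata: true` header is an IMDS requirement):

```python
import urllib.request
from pathlib import Path

# The instance-name endpoint the agent logs above.
NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
            "?api-version=2017-08-01&format=text")

def fetch_instance_name(url=NAME_URL):
    # IMDS rejects requests without the Metadata header.
    req = urllib.request.Request(url, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode().strip()

def write_hostname(name, sysroot="/sysroot"):
    # The hostname lands under the *target* root (e.g. /sysroot/etc/hostname),
    # so it takes effect after switch-root.
    path = Path(sysroot, "etc", "hostname")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(name + "\n")
    return path
```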
Nov 12 20:53:49.230114 ignition[1038]: INFO : Ignition 2.19.0
Nov 12 20:53:49.230114 ignition[1038]: INFO : Stage: mount
Nov 12 20:53:49.234075 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:49.234075 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:53:49.240648 ignition[1038]: INFO : mount: mount passed
Nov 12 20:53:49.242754 ignition[1038]: INFO : Ignition finished successfully
Nov 12 20:53:49.245404 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 20:53:49.256284 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 20:53:49.265338 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:53:49.281061 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1048)
Nov 12 20:53:49.281120 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:49.284345 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:49.287019 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:53:49.292186 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:53:49.293498 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:53:49.317435 ignition[1065]: INFO : Ignition 2.19.0
Nov 12 20:53:49.317435 ignition[1065]: INFO : Stage: files
Nov 12 20:53:49.321798 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:49.321798 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:53:49.321798 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 20:53:49.372840 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 20:53:49.372840 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 20:53:49.565381 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 20:53:49.569539 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 20:53:49.573309 unknown[1065]: wrote ssh authorized keys file for user: core
Nov 12 20:53:49.576272 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 20:53:49.610411 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 12 20:53:49.615732 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 12 20:53:49.615732 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:53:49.615732 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Nov 12 20:53:49.672147 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 12 20:53:49.796677 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:53:49.802721 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 20:53:49.807380 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 20:53:49.812049 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:53:49.817362 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:53:49.817362 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:53:49.826665 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:53:49.831221 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:53:49.835948 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:53:49.840596 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:53:49.845387 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:53:49.850244 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:53:49.856968 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:53:49.863705 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:53:49.869217 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Nov 12 20:53:50.427067 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 12 20:53:50.908283 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:53:50.908283 ignition[1065]: INFO : files: op(c): [started] processing unit "containerd.service"
Nov 12 20:53:50.918422 ignition[1065]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 12 20:53:50.918422 ignition[1065]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 12 20:53:50.918422 ignition[1065]: INFO : files: op(c): [finished] processing unit "containerd.service"
Nov 12 20:53:50.918422 ignition[1065]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Nov 12 20:53:50.918422 ignition[1065]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:53:50.918422 ignition[1065]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:53:50.918422 ignition[1065]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Nov 12 20:53:50.918422 ignition[1065]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 20:53:50.918422 ignition[1065]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 20:53:50.918422 ignition[1065]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:53:50.918422 ignition[1065]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:53:50.918422 ignition[1065]: INFO : files: files passed
Nov 12 20:53:50.918422 ignition[1065]: INFO : Ignition finished successfully
Nov 12 20:53:50.919247 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 20:53:50.940376 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 20:53:50.949333 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 20:53:51.001030 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:53:51.001030 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:53:50.963843 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 20:53:51.015671 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:53:50.963954 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 20:53:50.985646 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:53:50.991132 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 20:53:51.000302 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 20:53:51.035980 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 20:53:51.036101 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
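The files stage above activates the downloaded Kubernetes sysext by writing a link under the target root: /sysroot/etc/extensions/kubernetes.raw pointing at /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw. The link target is an absolute path *inside* the future root, so it only resolves after switch-root. A minimal sketch of that op (illustrative helper, not Ignition's own code):

```python
import os
from pathlib import Path

def write_link(sysroot, link, target):
    # The link itself is created under the target root (sysroot + link),
    # while the target string is left as an absolute in-root path, so the
    # symlink resolves correctly once the system has pivoted into sysroot.
    path = Path(sysroot) / link.lstrip("/")
    path.parent.mkdir(parents=True, exist_ok=True)
    os.symlink(target, path)
    return path
```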
Nov 12 20:53:51.042180 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 20:53:51.048536 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 20:53:51.053813 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 20:53:51.061495 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 20:53:51.074425 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:53:51.083310 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 20:53:51.094485 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:53:51.095711 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:53:51.096117 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 20:53:51.097001 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 20:53:51.097095 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:53:51.098443 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 20:53:51.098918 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 20:53:51.099368 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 20:53:51.099818 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:53:51.100286 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 20:53:51.100743 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 20:53:51.101202 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:53:51.101654 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 20:53:51.102109 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 20:53:51.102565 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 20:53:51.102979 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 20:53:51.103103 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:53:51.103902 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:53:51.104566 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:53:51.104967 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 20:53:51.148793 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:53:51.201424 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 20:53:51.201600 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:53:51.209813 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 12 20:53:51.209950 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:53:51.216713 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 12 20:53:51.216841 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 12 20:53:51.225666 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 12 20:53:51.228273 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 12 20:53:51.244412 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 12 20:53:51.248214 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 20:53:51.248372 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:53:51.257906 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 12 20:53:51.264287 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 12 20:53:51.264639 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:53:51.270478 ignition[1117]: INFO : Ignition 2.19.0
Nov 12 20:53:51.275907 ignition[1117]: INFO : Stage: umount
Nov 12 20:53:51.275907 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:51.275907 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:53:51.275907 ignition[1117]: INFO : umount: umount passed
Nov 12 20:53:51.275907 ignition[1117]: INFO : Ignition finished successfully
Nov 12 20:53:51.276506 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 12 20:53:51.276657 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:53:51.289773 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 12 20:53:51.289855 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 12 20:53:51.294942 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 12 20:53:51.295017 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 12 20:53:51.302346 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 12 20:53:51.306170 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 12 20:53:51.306232 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 12 20:53:51.311628 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 12 20:53:51.311676 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 12 20:53:51.314596 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 12 20:53:51.314639 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 12 20:53:51.319914 systemd[1]: Stopped target network.target - Network.
Nov 12 20:53:51.324747 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 12 20:53:51.324805 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:53:51.331551 systemd[1]: Stopped target paths.target - Path Units.
Nov 12 20:53:51.336237 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 12 20:53:51.344401 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:53:51.345367 systemd[1]: Stopped target slices.target - Slice Units.
Nov 12 20:53:51.345841 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 12 20:53:51.348671 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 12 20:53:51.348709 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:53:51.349629 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 12 20:53:51.349661 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:53:51.370233 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 12 20:53:51.373363 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 12 20:53:51.378465 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 12 20:53:51.381329 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 12 20:53:51.384351 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 12 20:53:51.390903 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 12 20:53:51.426208 systemd-networkd[871]: eth0: DHCPv6 lease lost
Nov 12 20:53:51.429323 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 12 20:53:51.429433 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 12 20:53:51.437927 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 12 20:53:51.440437 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 12 20:53:51.448175 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 12 20:53:51.448227 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:53:51.459260 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 12 20:53:51.461850 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 12 20:53:51.461909 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:53:51.465312 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 20:53:51.465353 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:53:51.470489 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 12 20:53:51.470538 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:53:51.473435 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 12 20:53:51.473485 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:53:51.481198 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:53:51.508723 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 12 20:53:51.511302 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:53:51.515299 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 12 20:53:51.515372 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:53:51.519414 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 12 20:53:51.519446 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:53:51.519878 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 12 20:53:51.519914 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:53:51.523146 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 12 20:53:51.523195 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:53:51.558126 kernel: hv_netvsc 000d3ab6-07e8-000d-3ab6-07e8000d3ab6 eth0: Data path switched from VF: enP53518s1
Nov 12 20:53:51.524075 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:53:51.524108 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:53:51.564364 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 12 20:53:51.567337 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 12 20:53:51.570614 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:53:51.577355 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:53:51.577439 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:51.584742 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 12 20:53:51.584830 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 12 20:53:51.590035 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 12 20:53:51.590110 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 12 20:53:51.840339 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 12 20:53:51.840489 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 12 20:53:51.849354 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 12 20:53:51.854866 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 12 20:53:51.854930 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 12 20:53:51.864716 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 12 20:53:52.023845 systemd[1]: Switching root.
Nov 12 20:53:52.052058 systemd-journald[176]: Journal stopped
Nov 12 20:53:56.993614 systemd-journald[176]: Received SIGTERM from PID 1 (systemd).
Nov 12 20:53:56.993669 kernel: SELinux: policy capability network_peer_controls=1
Nov 12 20:53:56.993689 kernel: SELinux: policy capability open_perms=1
Nov 12 20:53:56.993705 kernel: SELinux: policy capability extended_socket_class=1
Nov 12 20:53:56.993721 kernel: SELinux: policy capability always_check_network=0
Nov 12 20:53:56.993736 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 12 20:53:56.993754 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 12 20:53:56.993775 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 12 20:53:56.993798 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 12 20:53:56.993814 kernel: audit: type=1403 audit(1731444833.810:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 12 20:53:56.993830 systemd[1]: Successfully loaded SELinux policy in 76.982ms.
Nov 12 20:53:56.993851 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.737ms.
Nov 12 20:53:56.993869 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:53:56.993888 systemd[1]: Detected virtualization microsoft.
Nov 12 20:53:56.993910 systemd[1]: Detected architecture x86-64.
Nov 12 20:53:56.993930 systemd[1]: Detected first boot.
Nov 12 20:53:56.993949 systemd[1]: Hostname set to .
Nov 12 20:53:56.993969 systemd[1]: Initializing machine ID from random generator.
Nov 12 20:53:56.993987 zram_generator::config[1176]: No configuration found.
Nov 12 20:53:56.994013 systemd[1]: Populated /etc with preset unit settings.
Nov 12 20:53:56.994030 systemd[1]: Queued start job for default target multi-user.target.
Nov 12 20:53:56.994051 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 12 20:53:56.994069 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 12 20:53:56.994090 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 12 20:53:56.994109 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 12 20:53:56.994129 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 12 20:53:56.994152 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 12 20:53:57.008333 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 12 20:53:57.008357 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 12 20:53:57.008373 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 12 20:53:57.008387 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:53:57.008403 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:53:57.008419 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 12 20:53:57.008441 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 12 20:53:57.008465 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 12 20:53:57.008484 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:53:57.008499 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 12 20:53:57.008516 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:53:57.008533 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 12 20:53:57.008552 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:53:57.008575 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:53:57.008592 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:53:57.008613 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:53:57.008630 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 12 20:53:57.008650 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 12 20:53:57.008667 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:53:57.008683 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:53:57.008701 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:53:57.008717 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:53:57.008737 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:53:57.008754 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 12 20:53:57.008772 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 12 20:53:57.008789 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 12 20:53:57.008807 systemd[1]: Mounting media.mount - External Media Directory...
Nov 12 20:53:57.008827 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:57.008845 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 12 20:53:57.008863 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 12 20:53:57.008881 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 12 20:53:57.008898 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 12 20:53:57.008916 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:53:57.008934 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:53:57.008951 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 12 20:53:57.008972 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:53:57.008990 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:53:57.009007 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:53:57.009027 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 12 20:53:57.009045 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:53:57.009063 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 12 20:53:57.009081 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Nov 12 20:53:57.009099 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Nov 12 20:53:57.009120 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:53:57.009139 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:53:57.013718 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 12 20:53:57.013763 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 12 20:53:57.013788 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:53:57.013814 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:57.013836 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 12 20:53:57.013858 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 12 20:53:57.013883 systemd[1]: Mounted media.mount - External Media Directory.
Nov 12 20:53:57.013905 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 12 20:53:57.013957 systemd-journald[1267]: Collecting audit messages is disabled.
Nov 12 20:53:57.014000 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 12 20:53:57.014023 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 12 20:53:57.014045 kernel: loop: module loaded
Nov 12 20:53:57.014063 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:53:57.014085 systemd-journald[1267]: Journal started
Nov 12 20:53:57.014126 systemd-journald[1267]: Runtime Journal (/run/log/journal/e42747e8372d4657aa28aee203a5d1eb) is 8.0M, max 158.8M, 150.8M free.
Nov 12 20:53:57.020085 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:53:57.025861 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 12 20:53:57.026382 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 12 20:53:57.030389 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:53:57.030713 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:53:57.034464 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:53:57.034765 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:53:57.038823 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:53:57.039129 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:53:57.043032 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:53:57.046732 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 12 20:53:57.050881 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 12 20:53:57.060615 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:53:57.068026 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 12 20:53:57.077360 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 12 20:53:57.080689 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 12 20:53:57.208186 kernel: fuse: init (API version 7.39)
Nov 12 20:53:57.304736 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 12 20:53:57.312473 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 12 20:53:57.317405 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:53:57.325285 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 12 20:53:57.331679 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:53:57.340297 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:53:57.349314 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:53:57.365319 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 12 20:53:57.373939 kernel: ACPI: bus type drm_connector registered
Nov 12 20:53:57.377260 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 12 20:53:57.377463 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 12 20:53:57.381821 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:53:57.382030 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:53:57.393614 systemd-journald[1267]: Time spent on flushing to /var/log/journal/e42747e8372d4657aa28aee203a5d1eb is 27.723ms for 945 entries.
Nov 12 20:53:57.393614 systemd-journald[1267]: System Journal (/var/log/journal/e42747e8372d4657aa28aee203a5d1eb) is 8.0M, max 2.6G, 2.6G free.
Nov 12 20:53:58.239079 systemd-journald[1267]: Received client request to flush runtime journal.
Nov 12 20:53:57.397387 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 12 20:53:57.411241 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 12 20:53:57.418268 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 12 20:53:57.421665 udevadm[1322]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 12 20:53:57.688038 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 12 20:53:57.697595 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 12 20:53:57.702929 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 12 20:53:57.708869 systemd-tmpfiles[1319]: ACLs are not supported, ignoring.
Nov 12 20:53:57.708890 systemd-tmpfiles[1319]: ACLs are not supported, ignoring.
Nov 12 20:53:57.712408 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:53:57.728059 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:53:57.745305 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 12 20:53:58.240833 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 12 20:53:58.280615 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 12 20:53:58.290326 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:53:58.307767 systemd-tmpfiles[1357]: ACLs are not supported, ignoring.
Nov 12 20:53:58.307793 systemd-tmpfiles[1357]: ACLs are not supported, ignoring.
Nov 12 20:53:58.313141 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:53:59.934929 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 12 20:53:59.947371 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:53:59.967978 systemd-udevd[1363]: Using default interface naming scheme 'v255'.
Nov 12 20:54:00.346654 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:54:00.358001 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:54:00.392347 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 12 20:54:00.430878 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Nov 12 20:54:00.493204 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1373)
Nov 12 20:54:00.515180 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1373)
Nov 12 20:54:00.685195 kernel: mousedev: PS/2 mouse device common for all mice
Nov 12 20:54:00.691411 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 12 20:54:00.697184 kernel: hv_vmbus: registering driver hv_balloon
Nov 12 20:54:00.701179 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Nov 12 20:54:00.751403 kernel: hv_vmbus: registering driver hyperv_fb
Nov 12 20:54:00.751464 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Nov 12 20:54:00.757176 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Nov 12 20:54:00.759806 kernel: Console: switching to colour dummy device 80x25
Nov 12 20:54:00.763822 kernel: Console: switching to colour frame buffer device 128x48
Nov 12 20:54:00.799499 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:54:00.816769 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:54:00.817713 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:54:00.837696 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:54:01.237832 systemd-networkd[1366]: lo: Link UP
Nov 12 20:54:01.237842 systemd-networkd[1366]: lo: Gained carrier
Nov 12 20:54:01.240026 systemd-networkd[1366]: Enumeration completed
Nov 12 20:54:01.240288 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:54:01.243709 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:54:01.243721 systemd-networkd[1366]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:54:01.245407 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 12 20:54:01.293187 kernel: mlx5_core d10e:00:02.0 enP53518s1: Link up
Nov 12 20:54:01.311448 kernel: hv_netvsc 000d3ab6-07e8-000d-3ab6-07e8000d3ab6 eth0: Data path switched to VF: enP53518s1
Nov 12 20:54:01.311063 systemd-networkd[1366]: enP53518s1: Link UP
Nov 12 20:54:01.311220 systemd-networkd[1366]: eth0: Link UP
Nov 12 20:54:01.311225 systemd-networkd[1366]: eth0: Gained carrier
Nov 12 20:54:01.311246 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:54:01.317946 systemd-networkd[1366]: enP53518s1: Gained carrier
Nov 12 20:54:01.357197 systemd-networkd[1366]: eth0: DHCPv4 address 10.200.8.44/24, gateway 10.200.8.1 acquired from 168.63.129.16
Nov 12 20:54:01.522195 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1379)
Nov 12 20:54:01.564361 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Nov 12 20:54:01.637097 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Nov 12 20:54:01.907718 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 12 20:54:01.914760 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 12 20:54:01.957054 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:54:02.045802 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 12 20:54:02.047131 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:54:02.053474 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 12 20:54:02.059549 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:54:02.089099 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 12 20:54:02.093861 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:54:02.095148 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 12 20:54:02.095283 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:54:02.095650 systemd[1]: Reached target machines.target - Containers.
Nov 12 20:54:02.097606 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 12 20:54:02.107754 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 12 20:54:02.111792 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 12 20:54:02.114608 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:54:02.115606 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 12 20:54:02.119311 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 12 20:54:02.123509 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 12 20:54:02.138285 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 12 20:54:02.289514 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 12 20:54:02.301910 kernel: loop0: detected capacity change from 0 to 211296
Nov 12 20:54:02.340181 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 12 20:54:02.381217 kernel: loop1: detected capacity change from 0 to 140768
Nov 12 20:54:02.388443 systemd-networkd[1366]: eth0: Gained IPv6LL
Nov 12 20:54:02.396176 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 12 20:54:02.990601 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:54:03.028476 systemd-networkd[1366]: enP53518s1: Gained IPv6LL
Nov 12 20:54:04.993194 kernel: loop2: detected capacity change from 0 to 31056
Nov 12 20:54:06.891192 kernel: loop3: detected capacity change from 0 to 142488
Nov 12 20:54:07.281563 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 12 20:54:07.282754 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 12 20:54:08.161187 kernel: loop4: detected capacity change from 0 to 211296
Nov 12 20:54:08.168175 kernel: loop5: detected capacity change from 0 to 140768
Nov 12 20:54:08.179184 kernel: loop6: detected capacity change from 0 to 31056
Nov 12 20:54:08.184178 kernel: loop7: detected capacity change from 0 to 142488
Nov 12 20:54:08.191565 (sd-merge)[1483]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Nov 12 20:54:08.192085 (sd-merge)[1483]: Merged extensions into '/usr'.
Nov 12 20:54:08.195804 systemd[1]: Reloading requested from client PID 1463 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 12 20:54:08.195819 systemd[1]: Reloading...
Nov 12 20:54:08.264234 zram_generator::config[1513]: No configuration found.
Nov 12 20:54:08.466076 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:54:08.540012 systemd[1]: Reloading finished in 343 ms.
Nov 12 20:54:08.555591 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 12 20:54:08.565323 systemd[1]: Starting ensure-sysext.service...
Nov 12 20:54:08.570310 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:54:08.582361 systemd[1]: Reloading requested from client PID 1574 ('systemctl') (unit ensure-sysext.service)...
Nov 12 20:54:08.582382 systemd[1]: Reloading...
Nov 12 20:54:08.593454 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 12 20:54:08.594395 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 12 20:54:08.595741 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 12 20:54:08.596325 systemd-tmpfiles[1575]: ACLs are not supported, ignoring.
Nov 12 20:54:08.596508 systemd-tmpfiles[1575]: ACLs are not supported, ignoring.
Nov 12 20:54:08.600561 systemd-tmpfiles[1575]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:54:08.600674 systemd-tmpfiles[1575]: Skipping /boot
Nov 12 20:54:08.612865 systemd-tmpfiles[1575]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:54:08.612879 systemd-tmpfiles[1575]: Skipping /boot
Nov 12 20:54:08.668247 zram_generator::config[1605]: No configuration found.
Nov 12 20:54:08.792008 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:54:08.865434 systemd[1]: Reloading finished in 282 ms.
Nov 12 20:54:08.879738 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:54:08.895352 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 20:54:08.902847 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 12 20:54:08.909322 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 12 20:54:08.924146 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:54:08.931301 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 12 20:54:08.948922 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 12 20:54:08.961464 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:54:08.961828 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:54:08.969479 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:54:08.979434 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:54:08.993437 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:54:09.009443 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:54:09.012671 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:54:09.012906 systemd[1]: Reached target time-set.target - System Time Set.
Nov 12 20:54:09.016010 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:54:09.017212 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:54:09.017460 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:54:09.020912 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:54:09.021077 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:54:09.024223 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:54:09.024380 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:54:09.028027 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:54:09.028357 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:54:09.035030 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:54:09.035989 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:54:09.036521 systemd[1]: Finished ensure-sysext.service.
Nov 12 20:54:09.290726 systemd-resolved[1675]: Positive Trust Anchors:
Nov 12 20:54:09.290741 systemd-resolved[1675]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:54:09.290784 systemd-resolved[1675]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:54:09.346322 systemd-resolved[1675]: Using system hostname 'ci-4081.2.0-a-1543c8d709'.
Nov 12 20:54:09.348888 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:54:09.352311 systemd[1]: Reached target network.target - Network.
Nov 12 20:54:09.354637 systemd[1]: Reached target network-online.target - Network is Online.
Nov 12 20:54:09.357331 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:54:10.137674 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 12 20:54:10.263305 augenrules[1709]: No rules
Nov 12 20:54:10.265430 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 20:54:11.083905 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 12 20:54:11.088224 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 20:54:12.749032 ldconfig[1460]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 12 20:54:12.759226 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 12 20:54:12.768408 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 12 20:54:12.777672 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 12 20:54:12.781315 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:54:12.784389 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 12 20:54:12.787842 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 12 20:54:12.791200 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 12 20:54:12.794250 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 12 20:54:12.797475 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 12 20:54:12.801411 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 12 20:54:12.801448 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:54:12.804065 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:54:12.807088 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 12 20:54:12.811280 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 12 20:54:12.815057 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 12 20:54:12.819035 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 12 20:54:12.821984 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:54:12.824542 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:54:12.827106 systemd[1]: System is tainted: cgroupsv1
Nov 12 20:54:12.827157 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:54:12.827197 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:54:12.878282 systemd[1]: Starting chronyd.service - NTP client/server...
Nov 12 20:54:12.884290 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 12 20:54:12.896315 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 12 20:54:12.901376 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 12 20:54:12.917882 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 12 20:54:12.924893 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 12 20:54:12.927517 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 12 20:54:12.927566 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Nov 12 20:54:12.937185 jq[1732]: false
Nov 12 20:54:12.940341 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Nov 12 20:54:12.943302 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Nov 12 20:54:12.949962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:54:12.957130 (chronyd)[1727]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Nov 12 20:54:12.963393 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 12 20:54:12.966762 KVP[1736]: KVP starting; pid is:1736
Nov 12 20:54:12.976342 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 12 20:54:12.978273 chronyd[1744]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Nov 12 20:54:12.984075 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 12 20:54:12.989767 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 12 20:54:12.990532 chronyd[1744]: Timezone right/UTC failed leap second check, ignoring
Nov 12 20:54:12.993392 chronyd[1744]: Loaded seccomp filter (level 2)
Nov 12 20:54:13.000534 dbus-daemon[1730]: [system] SELinux support is enabled
Nov 12 20:54:13.010232 kernel: hv_utils: KVP IC version 4.0
Nov 12 20:54:13.010224 KVP[1736]: KVP LIC Version: 3.1
Nov 12 20:54:13.018307 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 12 20:54:13.027458 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 12 20:54:13.030895 extend-filesystems[1735]: Found loop4
Nov 12 20:54:13.035381 extend-filesystems[1735]: Found loop5
Nov 12 20:54:13.035381 extend-filesystems[1735]: Found loop6
Nov 12 20:54:13.035381 extend-filesystems[1735]: Found loop7
Nov 12 20:54:13.035381 extend-filesystems[1735]: Found sda
Nov 12 20:54:13.035381 extend-filesystems[1735]: Found sda1
Nov 12 20:54:13.035381 extend-filesystems[1735]: Found sda2
Nov 12 20:54:13.033859 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 12 20:54:13.056512 extend-filesystems[1735]: Found sda3
Nov 12 20:54:13.056512 extend-filesystems[1735]: Found usr
Nov 12 20:54:13.056512 extend-filesystems[1735]: Found sda4
Nov 12 20:54:13.056512 extend-filesystems[1735]: Found sda6
Nov 12 20:54:13.056512 extend-filesystems[1735]: Found sda7
Nov 12 20:54:13.056512 extend-filesystems[1735]: Found sda9
Nov 12 20:54:13.056512 extend-filesystems[1735]: Checking size of /dev/sda9
Nov 12 20:54:13.042335 systemd[1]: Starting update-engine.service - Update Engine...
Nov 12 20:54:13.061220 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 12 20:54:13.070979 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 12 20:54:13.095224 jq[1765]: true
Nov 12 20:54:13.088764 systemd[1]: Started chronyd.service - NTP client/server.
Nov 12 20:54:13.098713 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 12 20:54:13.099034 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 12 20:54:13.106311 systemd[1]: motdgen.service: Deactivated successfully.
Nov 12 20:54:13.106585 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 12 20:54:13.129779 coreos-metadata[1729]: Nov 12 20:54:13.119 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Nov 12 20:54:13.129779 coreos-metadata[1729]: Nov 12 20:54:13.119 INFO Fetch successful
Nov 12 20:54:13.129779 coreos-metadata[1729]: Nov 12 20:54:13.119 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Nov 12 20:54:13.129779 coreos-metadata[1729]: Nov 12 20:54:13.119 INFO Fetch successful
Nov 12 20:54:13.129779 coreos-metadata[1729]: Nov 12 20:54:13.119 INFO Fetching http://168.63.129.16/machine/984e652b-a93b-488e-80fc-7647f2c770f8/a737bdd9%2D762f%2D436f%2D81c1%2D1d03628408ba.%5Fci%2D4081.2.0%2Da%2D1543c8d709?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Nov 12 20:54:13.129779 coreos-metadata[1729]: Nov 12 20:54:13.119 INFO Fetch successful
Nov 12 20:54:13.129779 coreos-metadata[1729]: Nov 12 20:54:13.119 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Nov 12 20:54:13.121062 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 12 20:54:13.147599 coreos-metadata[1729]: Nov 12 20:54:13.141 INFO Fetch successful
Nov 12 20:54:13.147658 extend-filesystems[1735]: Old size kept for /dev/sda9
Nov 12 20:54:13.147658 extend-filesystems[1735]: Found sr0
Nov 12 20:54:13.135510 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 12 20:54:13.135776 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 12 20:54:13.143424 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 12 20:54:13.143703 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 12 20:54:13.227400 update_engine[1763]: I20241112 20:54:13.206599 1763 main.cc:92] Flatcar Update Engine starting
Nov 12 20:54:13.210750 (ntainerd)[1782]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 12 20:54:13.216788 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 12 20:54:13.216821 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 12 20:54:13.221474 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 12 20:54:13.221493 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 12 20:54:13.236272 tar[1779]: linux-amd64/helm
Nov 12 20:54:13.249501 update_engine[1763]: I20241112 20:54:13.249336 1763 update_check_scheduler.cc:74] Next update check in 7m26s
Nov 12 20:54:13.253417 systemd[1]: Started update-engine.service - Update Engine.
Nov 12 20:54:13.266772 jq[1781]: true
Nov 12 20:54:13.266079 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 12 20:54:13.275694 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 12 20:54:13.305785 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 12 20:54:13.317417 systemd-logind[1761]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 12 20:54:13.323757 systemd-logind[1761]: New seat seat0.
Nov 12 20:54:13.334856 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 12 20:54:13.343057 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 12 20:54:13.376202 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1819)
Nov 12 20:54:13.454135 bash[1846]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 20:54:13.460826 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 12 20:54:13.475633 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 12 20:54:13.583992 sshd_keygen[1773]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 12 20:54:13.644190 locksmithd[1804]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 12 20:54:13.651678 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 12 20:54:13.667470 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 12 20:54:13.678731 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Nov 12 20:54:13.708689 systemd[1]: issuegen.service: Deactivated successfully.
Nov 12 20:54:13.708974 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 12 20:54:13.732490 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 12 20:54:13.771302 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Nov 12 20:54:13.777787 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 12 20:54:13.796465 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 12 20:54:13.802023 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 12 20:54:13.809779 systemd[1]: Reached target getty.target - Login Prompts.
Nov 12 20:54:14.068271 containerd[1782]: time="2024-11-12T20:54:14.068040200Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 12 20:54:14.094835 tar[1779]: linux-amd64/LICENSE
Nov 12 20:54:14.094835 tar[1779]: linux-amd64/README.md
Nov 12 20:54:14.119204 containerd[1782]: time="2024-11-12T20:54:14.119002600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:54:14.125492 containerd[1782]: time="2024-11-12T20:54:14.125453500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:54:14.126267 containerd[1782]: time="2024-11-12T20:54:14.125539000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 12 20:54:14.126267 containerd[1782]: time="2024-11-12T20:54:14.125562200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 12 20:54:14.126267 containerd[1782]: time="2024-11-12T20:54:14.125712300Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 12 20:54:14.126267 containerd[1782]: time="2024-11-12T20:54:14.125732400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 12 20:54:14.126267 containerd[1782]: time="2024-11-12T20:54:14.125800000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:54:14.126575 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 12 20:54:14.129694 containerd[1782]: time="2024-11-12T20:54:14.128913600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:54:14.129694 containerd[1782]: time="2024-11-12T20:54:14.129150500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:54:14.129694 containerd[1782]: time="2024-11-12T20:54:14.129182200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 12 20:54:14.129694 containerd[1782]: time="2024-11-12T20:54:14.129196200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:54:14.129694 containerd[1782]: time="2024-11-12T20:54:14.129205500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 12 20:54:14.129694 containerd[1782]: time="2024-11-12T20:54:14.129277800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:54:14.129694 containerd[1782]: time="2024-11-12T20:54:14.129450100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:54:14.129694 containerd[1782]: time="2024-11-12T20:54:14.129615400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:54:14.129694 containerd[1782]: time="2024-11-12T20:54:14.129636100Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 12 20:54:14.130016 containerd[1782]: time="2024-11-12T20:54:14.129740500Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 12 20:54:14.131645 containerd[1782]: time="2024-11-12T20:54:14.130801800Z" level=info msg="metadata content store policy set" policy=shared
Nov 12 20:54:14.426835 containerd[1782]: time="2024-11-12T20:54:14.426729000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 12 20:54:14.427118 containerd[1782]: time="2024-11-12T20:54:14.427043000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 12 20:54:14.427338 containerd[1782]: time="2024-11-12T20:54:14.427309100Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 12 20:54:14.427411 containerd[1782]: time="2024-11-12T20:54:14.427343000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 12 20:54:14.427411 containerd[1782]: time="2024-11-12T20:54:14.427362800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 12 20:54:14.428491 containerd[1782]: time="2024-11-12T20:54:14.427536100Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 12 20:54:14.428491 containerd[1782]: time="2024-11-12T20:54:14.427987700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..."
type=io.containerd.runtime.v2 Nov 12 20:54:14.428491 containerd[1782]: time="2024-11-12T20:54:14.428128200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 20:54:14.428491 containerd[1782]: time="2024-11-12T20:54:14.428149800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 20:54:14.428491 containerd[1782]: time="2024-11-12T20:54:14.428189200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 20:54:14.428491 containerd[1782]: time="2024-11-12T20:54:14.428210000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 20:54:14.428491 containerd[1782]: time="2024-11-12T20:54:14.428228400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 20:54:14.428491 containerd[1782]: time="2024-11-12T20:54:14.428246600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 20:54:14.428491 containerd[1782]: time="2024-11-12T20:54:14.428265800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 20:54:14.428491 containerd[1782]: time="2024-11-12T20:54:14.428285000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 20:54:14.428491 containerd[1782]: time="2024-11-12T20:54:14.428302600Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 20:54:14.428491 containerd[1782]: time="2024-11-12T20:54:14.428320400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Nov 12 20:54:14.428491 containerd[1782]: time="2024-11-12T20:54:14.428338300Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 20:54:14.428491 containerd[1782]: time="2024-11-12T20:54:14.428364800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 20:54:14.428999 containerd[1782]: time="2024-11-12T20:54:14.428390800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 20:54:14.428999 containerd[1782]: time="2024-11-12T20:54:14.428410800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 20:54:14.428999 containerd[1782]: time="2024-11-12T20:54:14.428429500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 20:54:14.428999 containerd[1782]: time="2024-11-12T20:54:14.428448400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 20:54:14.428999 containerd[1782]: time="2024-11-12T20:54:14.428465900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 20:54:14.428999 containerd[1782]: time="2024-11-12T20:54:14.428481200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 20:54:14.428999 containerd[1782]: time="2024-11-12T20:54:14.428542200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 20:54:14.428999 containerd[1782]: time="2024-11-12T20:54:14.428561100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 20:54:14.428999 containerd[1782]: time="2024-11-12T20:54:14.428583800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Nov 12 20:54:14.428999 containerd[1782]: time="2024-11-12T20:54:14.428601800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 20:54:14.428999 containerd[1782]: time="2024-11-12T20:54:14.428618700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 20:54:14.428999 containerd[1782]: time="2024-11-12T20:54:14.428637700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 20:54:14.428999 containerd[1782]: time="2024-11-12T20:54:14.428661800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 20:54:14.428999 containerd[1782]: time="2024-11-12T20:54:14.428702400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 20:54:14.428999 containerd[1782]: time="2024-11-12T20:54:14.428722100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 20:54:14.429537 containerd[1782]: time="2024-11-12T20:54:14.428737900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 20:54:14.429537 containerd[1782]: time="2024-11-12T20:54:14.428791300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 20:54:14.429537 containerd[1782]: time="2024-11-12T20:54:14.428815500Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 20:54:14.429537 containerd[1782]: time="2024-11-12T20:54:14.428832100Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Nov 12 20:54:14.429537 containerd[1782]: time="2024-11-12T20:54:14.428851000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 20:54:14.429537 containerd[1782]: time="2024-11-12T20:54:14.428865500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 20:54:14.429537 containerd[1782]: time="2024-11-12T20:54:14.428881500Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 20:54:14.429537 containerd[1782]: time="2024-11-12T20:54:14.428908200Z" level=info msg="NRI interface is disabled by configuration." Nov 12 20:54:14.429537 containerd[1782]: time="2024-11-12T20:54:14.428930500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 12 20:54:14.429837 containerd[1782]: time="2024-11-12T20:54:14.429329900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 20:54:14.429837 containerd[1782]: time="2024-11-12T20:54:14.429420300Z" level=info msg="Connect containerd service" Nov 12 20:54:14.429837 containerd[1782]: time="2024-11-12T20:54:14.429478400Z" level=info msg="using legacy CRI server" Nov 12 20:54:14.429837 containerd[1782]: time="2024-11-12T20:54:14.429489300Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 20:54:14.429837 containerd[1782]: 
time="2024-11-12T20:54:14.429657600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 20:54:14.432200 containerd[1782]: time="2024-11-12T20:54:14.430339600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 20:54:14.432200 containerd[1782]: time="2024-11-12T20:54:14.430717900Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 20:54:14.432200 containerd[1782]: time="2024-11-12T20:54:14.430798100Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 20:54:14.432200 containerd[1782]: time="2024-11-12T20:54:14.431005600Z" level=info msg="Start subscribing containerd event" Nov 12 20:54:14.432200 containerd[1782]: time="2024-11-12T20:54:14.431062100Z" level=info msg="Start recovering state" Nov 12 20:54:14.432200 containerd[1782]: time="2024-11-12T20:54:14.431141200Z" level=info msg="Start event monitor" Nov 12 20:54:14.432200 containerd[1782]: time="2024-11-12T20:54:14.431153200Z" level=info msg="Start snapshots syncer" Nov 12 20:54:14.432200 containerd[1782]: time="2024-11-12T20:54:14.431183000Z" level=info msg="Start cni network conf syncer for default" Nov 12 20:54:14.432200 containerd[1782]: time="2024-11-12T20:54:14.431194400Z" level=info msg="Start streaming server" Nov 12 20:54:14.432200 containerd[1782]: time="2024-11-12T20:54:14.431275700Z" level=info msg="containerd successfully booted in 0.364396s" Nov 12 20:54:14.433271 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 20:54:14.683340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:14.687856 systemd[1]: Reached target multi-user.target - Multi-User System. 
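An aside on reading the containerd entries above: they are logfmt-style key=value records (`time=…`, `level=info`, `msg="…"`), so they can be picked apart mechanically. A minimal parser sketch (a simplified illustration using the first containerd line from this log, not containerd's own tooling):

```python
import shlex

def parse_logfmt(line):
    """Split a logfmt-style record into a dict of key -> value.
    shlex.split honors the double quotes around msg values, so
    'msg="starting containerd"' becomes one token."""
    fields = {}
    for token in shlex.split(line):
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

# Fields copied from the "starting containerd" entry above.
record = parse_logfmt(
    'time="2024-11-12T20:54:14.068040200Z" level=info '
    'msg="starting containerd" '
    'revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21'
)
print(record["level"], "|", record["msg"], "|", record["version"])
```

This treats every `key=value` token uniformly; real containerd output can also contain bare words, which this sketch simply ignores.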
Nov 12 20:54:14.690881 (kubelet)[1922]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:54:14.691332 systemd[1]: Startup finished in 482ms (firmware) + 49.988s (loader) + 14.844s (kernel) + 20.956s (userspace) = 1min 26.271s. Nov 12 20:54:15.302912 kubelet[1922]: E1112 20:54:15.302838 1922 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:54:15.305432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:54:15.305737 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:54:15.345455 login[1903]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 12 20:54:15.349519 login[1904]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 12 20:54:15.356818 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 20:54:15.361431 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 20:54:15.365656 systemd-logind[1761]: New session 1 of user core. Nov 12 20:54:15.371588 systemd-logind[1761]: New session 2 of user core. Nov 12 20:54:15.381308 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 20:54:15.390465 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 20:54:15.396881 (systemd)[1939]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 20:54:15.669390 systemd[1939]: Queued start job for default target default.target. Nov 12 20:54:15.669768 systemd[1939]: Created slice app.slice - User Application Slice. 
Nov 12 20:54:15.669792 systemd[1939]: Reached target paths.target - Paths. Nov 12 20:54:15.669810 systemd[1939]: Reached target timers.target - Timers. Nov 12 20:54:15.679428 systemd[1939]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 20:54:15.686398 systemd[1939]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 20:54:15.687368 systemd[1939]: Reached target sockets.target - Sockets. Nov 12 20:54:15.687393 systemd[1939]: Reached target basic.target - Basic System. Nov 12 20:54:15.687444 systemd[1939]: Reached target default.target - Main User Target. Nov 12 20:54:15.687482 systemd[1939]: Startup finished in 285ms. Nov 12 20:54:15.687610 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 20:54:15.695541 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 20:54:15.696441 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 20:54:16.507307 waagent[1896]: 2024-11-12T20:54:16.507210Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Nov 12 20:54:16.544242 waagent[1896]: 2024-11-12T20:54:16.508668Z INFO Daemon Daemon OS: flatcar 4081.2.0 Nov 12 20:54:16.544242 waagent[1896]: 2024-11-12T20:54:16.509137Z INFO Daemon Daemon Python: 3.11.9 Nov 12 20:54:16.544242 waagent[1896]: 2024-11-12T20:54:16.510347Z INFO Daemon Daemon Run daemon Nov 12 20:54:16.544242 waagent[1896]: 2024-11-12T20:54:16.511145Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.2.0' Nov 12 20:54:16.544242 waagent[1896]: 2024-11-12T20:54:16.511990Z INFO Daemon Daemon Using waagent for provisioning Nov 12 20:54:16.544242 waagent[1896]: 2024-11-12T20:54:16.513029Z INFO Daemon Daemon Activate resource disk Nov 12 20:54:16.544242 waagent[1896]: 2024-11-12T20:54:16.513937Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 12 20:54:16.544242 waagent[1896]: 2024-11-12T20:54:16.518368Z INFO Daemon Daemon Found device: 
None Nov 12 20:54:16.544242 waagent[1896]: 2024-11-12T20:54:16.519107Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Nov 12 20:54:16.544242 waagent[1896]: 2024-11-12T20:54:16.519570Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 12 20:54:16.544242 waagent[1896]: 2024-11-12T20:54:16.521310Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 12 20:54:16.544242 waagent[1896]: 2024-11-12T20:54:16.521519Z INFO Daemon Daemon Running default provisioning handler Nov 12 20:54:16.547390 waagent[1896]: 2024-11-12T20:54:16.547266Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Nov 12 20:54:16.554255 waagent[1896]: 2024-11-12T20:54:16.554210Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 12 20:54:16.559088 waagent[1896]: 2024-11-12T20:54:16.559038Z INFO Daemon Daemon cloud-init is enabled: False Nov 12 20:54:16.563574 waagent[1896]: 2024-11-12T20:54:16.560239Z INFO Daemon Daemon Copying ovf-env.xml Nov 12 20:54:16.598860 waagent[1896]: 2024-11-12T20:54:16.596334Z INFO Daemon Daemon Successfully mounted dvd Nov 12 20:54:16.609643 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Nov 12 20:54:16.611867 waagent[1896]: 2024-11-12T20:54:16.611818Z INFO Daemon Daemon Detect protocol endpoint Nov 12 20:54:16.626807 waagent[1896]: 2024-11-12T20:54:16.612991Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 12 20:54:16.626807 waagent[1896]: 2024-11-12T20:54:16.613926Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Nov 12 20:54:16.626807 waagent[1896]: 2024-11-12T20:54:16.614810Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 12 20:54:16.626807 waagent[1896]: 2024-11-12T20:54:16.615840Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 12 20:54:16.626807 waagent[1896]: 2024-11-12T20:54:16.616651Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 12 20:54:16.630175 waagent[1896]: 2024-11-12T20:54:16.630117Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 12 20:54:16.638171 waagent[1896]: 2024-11-12T20:54:16.631582Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 12 20:54:16.638171 waagent[1896]: 2024-11-12T20:54:16.632237Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 12 20:54:16.971901 waagent[1896]: 2024-11-12T20:54:16.971756Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 12 20:54:16.978508 waagent[1896]: 2024-11-12T20:54:16.973181Z INFO Daemon Daemon Forcing an update of the goal state. Nov 12 20:54:16.981678 waagent[1896]: 2024-11-12T20:54:16.981627Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 12 20:54:17.026212 waagent[1896]: 2024-11-12T20:54:17.026132Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Nov 12 20:54:17.042740 waagent[1896]: 2024-11-12T20:54:17.027752Z INFO Daemon Nov 12 20:54:17.042740 waagent[1896]: 2024-11-12T20:54:17.029686Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 69b0a8a9-aa62-450a-be8c-3d4a09ce3f15 eTag: 1289955219394523136 source: Fabric] Nov 12 20:54:17.042740 waagent[1896]: 2024-11-12T20:54:17.031394Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Nov 12 20:54:17.042740 waagent[1896]: 2024-11-12T20:54:17.032523Z INFO Daemon Nov 12 20:54:17.042740 waagent[1896]: 2024-11-12T20:54:17.033616Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Nov 12 20:54:17.045620 waagent[1896]: 2024-11-12T20:54:17.045579Z INFO Daemon Daemon Downloading artifacts profile blob Nov 12 20:54:17.115879 waagent[1896]: 2024-11-12T20:54:17.115813Z INFO Daemon Downloaded certificate {'thumbprint': 'BE3074797D975ECA7F230CE73744118CDC25FB33', 'hasPrivateKey': False} Nov 12 20:54:17.121267 waagent[1896]: 2024-11-12T20:54:17.121213Z INFO Daemon Downloaded certificate {'thumbprint': '0084BA97CA7AFA5187FC7EC0F0B6EBC0FC488211', 'hasPrivateKey': True} Nov 12 20:54:17.128261 waagent[1896]: 2024-11-12T20:54:17.123326Z INFO Daemon Fetch goal state completed Nov 12 20:54:17.135001 waagent[1896]: 2024-11-12T20:54:17.134954Z INFO Daemon Daemon Starting provisioning Nov 12 20:54:17.141981 waagent[1896]: 2024-11-12T20:54:17.136199Z INFO Daemon Daemon Handle ovf-env.xml. Nov 12 20:54:17.141981 waagent[1896]: 2024-11-12T20:54:17.136692Z INFO Daemon Daemon Set hostname [ci-4081.2.0-a-1543c8d709] Nov 12 20:54:17.280086 waagent[1896]: 2024-11-12T20:54:17.279995Z INFO Daemon Daemon Publish hostname [ci-4081.2.0-a-1543c8d709] Nov 12 20:54:17.284640 waagent[1896]: 2024-11-12T20:54:17.284558Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 12 20:54:17.288086 waagent[1896]: 2024-11-12T20:54:17.288028Z INFO Daemon Daemon Primary interface is [eth0] Nov 12 20:54:17.303694 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:54:17.303702 systemd-networkd[1366]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 12 20:54:17.303748 systemd-networkd[1366]: eth0: DHCP lease lost Nov 12 20:54:17.304923 waagent[1896]: 2024-11-12T20:54:17.304585Z INFO Daemon Daemon Create user account if not exists Nov 12 20:54:17.307746 waagent[1896]: 2024-11-12T20:54:17.307695Z INFO Daemon Daemon User core already exists, skip useradd Nov 12 20:54:17.322520 waagent[1896]: 2024-11-12T20:54:17.308818Z INFO Daemon Daemon Configure sudoer Nov 12 20:54:17.322520 waagent[1896]: 2024-11-12T20:54:17.309973Z INFO Daemon Daemon Configure sshd Nov 12 20:54:17.322520 waagent[1896]: 2024-11-12T20:54:17.311002Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Nov 12 20:54:17.322520 waagent[1896]: 2024-11-12T20:54:17.311803Z INFO Daemon Daemon Deploy ssh public key. Nov 12 20:54:17.322588 systemd-networkd[1366]: eth0: DHCPv6 lease lost Nov 12 20:54:17.357211 systemd-networkd[1366]: eth0: DHCPv4 address 10.200.8.44/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 12 20:54:25.556010 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 20:54:25.561405 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:25.663352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 20:54:25.666338 (kubelet)[2009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:54:26.246916 kubelet[2009]: E1112 20:54:26.246855 2009 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:54:26.250958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:54:26.251272 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:54:36.268968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 20:54:36.274376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:36.366347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:36.375819 (kubelet)[2030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:54:36.418087 kubelet[2030]: E1112 20:54:36.418030 2030 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:54:36.420394 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:54:36.420657 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:54:36.790136 chronyd[1744]: Selected source PHC0 Nov 12 20:54:46.518954 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
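The kubelet failures above (restart counters 1 through 3, roughly ten seconds apart per systemd's restart scheduling) all report the same root cause: `/var/lib/kubelet/config.yaml` does not exist yet, which is expected on a node that has not run `kubeadm init`/`join`. A sketch of the failing load step, mirroring the error shape in the log (illustrative only, not kubelet source; a temporary path stands in for the real one so the sketch is self-contained):

```python
import os
import tempfile

def load_kubelet_config(path):
    """Read a kubelet config file, raising an error shaped like the
    one kubelet logs when the file is absent."""
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError as exc:
        raise RuntimeError(
            f"failed to load kubelet config file, path: {path}, "
            f"error: open {path}: no such file or directory"
        ) from exc

# A path guaranteed not to exist, standing in for
# /var/lib/kubelet/config.yaml on an unprovisioned node.
missing = os.path.join(tempfile.mkdtemp(), "config.yaml")
try:
    load_kubelet_config(missing)
    outcome = "loaded"
except RuntimeError as err:
    outcome = "failed"
    print(err)
print(outcome)
```

After each such exit with status 1, systemd's `Restart=` policy schedules the next attempt, which is exactly the "Scheduled restart job, restart counter is at N" cadence visible in the log.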
Nov 12 20:54:46.524372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:46.623144 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:46.625908 (kubelet)[2051]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:54:47.213174 kubelet[2051]: E1112 20:54:47.213103 2051 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:54:47.215827 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:54:47.216133 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:54:47.390179 waagent[1896]: 2024-11-12T20:54:47.390109Z INFO Daemon Daemon Provisioning complete Nov 12 20:54:47.404517 waagent[1896]: 2024-11-12T20:54:47.404466Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 12 20:54:47.411576 waagent[1896]: 2024-11-12T20:54:47.405632Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Nov 12 20:54:47.411576 waagent[1896]: 2024-11-12T20:54:47.406515Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Nov 12 20:54:47.527080 waagent[2060]: 2024-11-12T20:54:47.527000Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Nov 12 20:54:47.527515 waagent[2060]: 2024-11-12T20:54:47.527137Z INFO ExtHandler ExtHandler OS: flatcar 4081.2.0 Nov 12 20:54:47.527515 waagent[2060]: 2024-11-12T20:54:47.527244Z INFO ExtHandler ExtHandler Python: 3.11.9 Nov 12 20:54:47.545801 waagent[2060]: 2024-11-12T20:54:47.545739Z INFO ExtHandler ExtHandler Distro: flatcar-4081.2.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Nov 12 20:54:47.545982 waagent[2060]: 2024-11-12T20:54:47.545937Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 12 20:54:47.546059 waagent[2060]: 2024-11-12T20:54:47.546026Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 12 20:54:47.552897 waagent[2060]: 2024-11-12T20:54:47.552837Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 12 20:54:47.563589 waagent[2060]: 2024-11-12T20:54:47.563539Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Nov 12 20:54:47.564013 waagent[2060]: 2024-11-12T20:54:47.563958Z INFO ExtHandler Nov 12 20:54:47.564086 waagent[2060]: 2024-11-12T20:54:47.564047Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f2e625ed-5e73-4627-acae-da3994b7ef77 eTag: 1289955219394523136 source: Fabric] Nov 12 20:54:47.564451 waagent[2060]: 2024-11-12T20:54:47.564399Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Nov 12 20:54:47.564995 waagent[2060]: 2024-11-12T20:54:47.564938Z INFO ExtHandler Nov 12 20:54:47.565068 waagent[2060]: 2024-11-12T20:54:47.565021Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 12 20:54:47.568590 waagent[2060]: 2024-11-12T20:54:47.568550Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 12 20:54:47.647069 waagent[2060]: 2024-11-12T20:54:47.646994Z INFO ExtHandler Downloaded certificate {'thumbprint': 'BE3074797D975ECA7F230CE73744118CDC25FB33', 'hasPrivateKey': False} Nov 12 20:54:47.647467 waagent[2060]: 2024-11-12T20:54:47.647417Z INFO ExtHandler Downloaded certificate {'thumbprint': '0084BA97CA7AFA5187FC7EC0F0B6EBC0FC488211', 'hasPrivateKey': True} Nov 12 20:54:47.647874 waagent[2060]: 2024-11-12T20:54:47.647824Z INFO ExtHandler Fetch goal state completed Nov 12 20:54:47.665212 waagent[2060]: 2024-11-12T20:54:47.665139Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2060 Nov 12 20:54:47.665359 waagent[2060]: 2024-11-12T20:54:47.665314Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Nov 12 20:54:47.666852 waagent[2060]: 2024-11-12T20:54:47.666797Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.2.0', '', 'Flatcar Container Linux by Kinvolk'] Nov 12 20:54:47.667240 waagent[2060]: 2024-11-12T20:54:47.667194Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Nov 12 20:54:47.679895 waagent[2060]: 2024-11-12T20:54:47.679860Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Nov 12 20:54:47.680065 waagent[2060]: 2024-11-12T20:54:47.680023Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 12 20:54:47.686282 waagent[2060]: 2024-11-12T20:54:47.686119Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Nov 12 20:54:47.692744 systemd[1]: Reloading requested from client PID 2075 ('systemctl') (unit waagent.service)... Nov 12 20:54:47.692761 systemd[1]: Reloading... Nov 12 20:54:47.775184 zram_generator::config[2109]: No configuration found. Nov 12 20:54:47.906791 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:54:47.982470 systemd[1]: Reloading finished in 289 ms. Nov 12 20:54:48.008949 waagent[2060]: 2024-11-12T20:54:48.008487Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Nov 12 20:54:48.015648 systemd[1]: Reloading requested from client PID 2171 ('systemctl') (unit waagent.service)... Nov 12 20:54:48.015662 systemd[1]: Reloading... Nov 12 20:54:48.105203 zram_generator::config[2211]: No configuration found. Nov 12 20:54:48.220330 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:54:48.295639 systemd[1]: Reloading finished in 279 ms. Nov 12 20:54:48.319196 waagent[2060]: 2024-11-12T20:54:48.318487Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Nov 12 20:54:48.319196 waagent[2060]: 2024-11-12T20:54:48.318680Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Nov 12 20:54:48.425282 waagent[2060]: 2024-11-12T20:54:48.425203Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Nov 12 20:54:48.425827 waagent[2060]: 2024-11-12T20:54:48.425770Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Nov 12 20:54:48.426573 waagent[2060]: 2024-11-12T20:54:48.426524Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 12 20:54:48.427059 waagent[2060]: 2024-11-12T20:54:48.426984Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 12 20:54:48.427214 waagent[2060]: 2024-11-12T20:54:48.427133Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 12 20:54:48.427487 waagent[2060]: 2024-11-12T20:54:48.427432Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 12 20:54:48.427663 waagent[2060]: 2024-11-12T20:54:48.427580Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 12 20:54:48.427883 waagent[2060]: 2024-11-12T20:54:48.427843Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 12 20:54:48.428309 waagent[2060]: 2024-11-12T20:54:48.428262Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 12 20:54:48.428424 waagent[2060]: 2024-11-12T20:54:48.428374Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 12 20:54:48.428909 waagent[2060]: 2024-11-12T20:54:48.428796Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Nov 12 20:54:48.428909 waagent[2060]: 2024-11-12T20:54:48.428851Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Nov 12 20:54:48.429015 waagent[2060]: 2024-11-12T20:54:48.428969Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 12 20:54:48.429197 waagent[2060]: 2024-11-12T20:54:48.429124Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 12 20:54:48.429574 waagent[2060]: 2024-11-12T20:54:48.429517Z INFO EnvHandler ExtHandler Configure routes Nov 12 20:54:48.430030 waagent[2060]: 2024-11-12T20:54:48.429962Z INFO EnvHandler ExtHandler Gateway:None Nov 12 20:54:48.430122 waagent[2060]: 2024-11-12T20:54:48.430032Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 12 20:54:48.430122 waagent[2060]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 12 20:54:48.430122 waagent[2060]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Nov 12 20:54:48.430122 waagent[2060]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 12 20:54:48.430122 waagent[2060]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 12 20:54:48.430122 waagent[2060]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 12 20:54:48.430122 waagent[2060]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 12 20:54:48.432054 waagent[2060]: 2024-11-12T20:54:48.431955Z INFO EnvHandler ExtHandler Routes:None Nov 12 20:54:48.436472 waagent[2060]: 2024-11-12T20:54:48.436429Z INFO ExtHandler ExtHandler Nov 12 20:54:48.436574 waagent[2060]: 2024-11-12T20:54:48.436529Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 6fe27ff2-0e42-48ca-a5e3-0c71e3d38a75 correlation 9adb4ec2-c9f7-4515-b5f4-73983c36fa94 created: 2024-11-12T20:52:37.478821Z] Nov 12 20:54:48.437444 waagent[2060]: 2024-11-12T20:54:48.437345Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Nov 12 20:54:48.439891 waagent[2060]: 2024-11-12T20:54:48.439856Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Nov 12 20:54:48.455659 waagent[2060]: 2024-11-12T20:54:48.455154Z INFO MonitorHandler ExtHandler Network interfaces: Nov 12 20:54:48.455659 waagent[2060]: Executing ['ip', '-a', '-o', 'link']: Nov 12 20:54:48.455659 waagent[2060]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 12 20:54:48.455659 waagent[2060]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b6:07:e8 brd ff:ff:ff:ff:ff:ff Nov 12 20:54:48.455659 waagent[2060]: 3: enP53518s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b6:07:e8 brd ff:ff:ff:ff:ff:ff\ altname enP53518p0s2 Nov 12 20:54:48.455659 waagent[2060]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 12 20:54:48.455659 waagent[2060]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 12 20:54:48.455659 waagent[2060]: 2: eth0 inet 10.200.8.44/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 12 20:54:48.455659 waagent[2060]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 12 20:54:48.455659 waagent[2060]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Nov 12 20:54:48.455659 waagent[2060]: 2: eth0 inet6 fe80::20d:3aff:feb6:7e8/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 12 20:54:48.455659 waagent[2060]: 3: enP53518s1 inet6 fe80::20d:3aff:feb6:7e8/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 12 20:54:48.474202 waagent[2060]: 2024-11-12T20:54:48.474094Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 
B321664D-DFD2-40C5-A2C9-C17C09376E90;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Nov 12 20:54:48.509925 waagent[2060]: 2024-11-12T20:54:48.509867Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Nov 12 20:54:48.509925 waagent[2060]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 12 20:54:48.509925 waagent[2060]: pkts bytes target prot opt in out source destination Nov 12 20:54:48.509925 waagent[2060]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 12 20:54:48.509925 waagent[2060]: pkts bytes target prot opt in out source destination Nov 12 20:54:48.509925 waagent[2060]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 12 20:54:48.509925 waagent[2060]: pkts bytes target prot opt in out source destination Nov 12 20:54:48.509925 waagent[2060]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 12 20:54:48.509925 waagent[2060]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 12 20:54:48.509925 waagent[2060]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 12 20:54:48.512979 waagent[2060]: 2024-11-12T20:54:48.512923Z INFO EnvHandler ExtHandler Current Firewall rules: Nov 12 20:54:48.512979 waagent[2060]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 12 20:54:48.512979 waagent[2060]: pkts bytes target prot opt in out source destination Nov 12 20:54:48.512979 waagent[2060]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 12 20:54:48.512979 waagent[2060]: pkts bytes target prot opt in out source destination Nov 12 20:54:48.512979 waagent[2060]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 12 20:54:48.512979 waagent[2060]: pkts bytes target prot opt in out source destination Nov 12 20:54:48.512979 waagent[2060]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 12 20:54:48.512979 waagent[2060]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 12 20:54:48.512979 waagent[2060]: 0 0 DROP tcp -- * * 0.0.0.0/0 
168.63.129.16 ctstate INVALID,NEW Nov 12 20:54:48.513400 waagent[2060]: 2024-11-12T20:54:48.513230Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Nov 12 20:54:48.832808 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Nov 12 20:54:57.268740 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 12 20:54:57.281717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:57.629411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:57.637713 (kubelet)[2313]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:54:57.916546 kubelet[2313]: E1112 20:54:57.916418 2313 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:54:57.919235 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:54:57.919568 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:54:58.932750 update_engine[1763]: I20241112 20:54:58.932569 1763 update_attempter.cc:509] Updating boot flags... Nov 12 20:54:58.985821 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2335) Nov 12 20:54:59.082200 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2339) Nov 12 20:55:08.019008 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 12 20:55:08.026702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:55:08.381357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 20:55:08.384054 (kubelet)[2401]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:55:08.658912 kubelet[2401]: E1112 20:55:08.658797 2401 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:55:08.661631 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:55:08.661960 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:55:14.996302 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 20:55:15.007837 systemd[1]: Started sshd@0-10.200.8.44:22-10.200.16.10:51984.service - OpenSSH per-connection server daemon (10.200.16.10:51984). Nov 12 20:55:15.643485 sshd[2409]: Accepted publickey for core from 10.200.16.10 port 51984 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:55:15.645236 sshd[2409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:15.650200 systemd-logind[1761]: New session 3 of user core. Nov 12 20:55:15.656672 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 20:55:16.190800 systemd[1]: Started sshd@1-10.200.8.44:22-10.200.16.10:51994.service - OpenSSH per-connection server daemon (10.200.16.10:51994). Nov 12 20:55:16.813411 sshd[2414]: Accepted publickey for core from 10.200.16.10 port 51994 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:55:16.815070 sshd[2414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:16.819580 systemd-logind[1761]: New session 4 of user core. Nov 12 20:55:16.830051 systemd[1]: Started session-4.scope - Session 4 of User core. 
Nov 12 20:55:17.260938 sshd[2414]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:17.264411 systemd[1]: sshd@1-10.200.8.44:22-10.200.16.10:51994.service: Deactivated successfully. Nov 12 20:55:17.269118 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 20:55:17.269832 systemd-logind[1761]: Session 4 logged out. Waiting for processes to exit. Nov 12 20:55:17.270863 systemd-logind[1761]: Removed session 4. Nov 12 20:55:17.367431 systemd[1]: Started sshd@2-10.200.8.44:22-10.200.16.10:52008.service - OpenSSH per-connection server daemon (10.200.16.10:52008). Nov 12 20:55:17.989364 sshd[2422]: Accepted publickey for core from 10.200.16.10 port 52008 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:55:17.992321 sshd[2422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:17.997062 systemd-logind[1761]: New session 5 of user core. Nov 12 20:55:18.006523 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 20:55:18.431392 sshd[2422]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:18.435179 systemd[1]: sshd@2-10.200.8.44:22-10.200.16.10:52008.service: Deactivated successfully. Nov 12 20:55:18.441203 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 20:55:18.441930 systemd-logind[1761]: Session 5 logged out. Waiting for processes to exit. Nov 12 20:55:18.442795 systemd-logind[1761]: Removed session 5. Nov 12 20:55:18.539494 systemd[1]: Started sshd@3-10.200.8.44:22-10.200.16.10:34858.service - OpenSSH per-connection server daemon (10.200.16.10:34858). Nov 12 20:55:18.768862 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Nov 12 20:55:18.780728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:55:18.990725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 20:55:18.990994 (kubelet)[2444]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:55:19.160813 sshd[2430]: Accepted publickey for core from 10.200.16.10 port 34858 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:55:19.162141 sshd[2430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:19.166218 systemd-logind[1761]: New session 6 of user core. Nov 12 20:55:19.173471 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 20:55:19.395526 kubelet[2444]: E1112 20:55:19.395469 2444 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:55:19.398239 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:55:19.398602 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:55:19.608307 sshd[2430]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:19.611499 systemd[1]: sshd@3-10.200.8.44:22-10.200.16.10:34858.service: Deactivated successfully. Nov 12 20:55:19.617065 systemd-logind[1761]: Session 6 logged out. Waiting for processes to exit. Nov 12 20:55:19.617610 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 20:55:19.618975 systemd-logind[1761]: Removed session 6. Nov 12 20:55:19.722642 systemd[1]: Started sshd@4-10.200.8.44:22-10.200.16.10:34868.service - OpenSSH per-connection server daemon (10.200.16.10:34868). 
Nov 12 20:55:20.343177 sshd[2460]: Accepted publickey for core from 10.200.16.10 port 34868 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:55:20.344746 sshd[2460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:20.349944 systemd-logind[1761]: New session 7 of user core. Nov 12 20:55:20.359597 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 20:55:20.726272 sudo[2464]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 20:55:20.726633 sudo[2464]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:55:20.744406 sudo[2464]: pam_unix(sudo:session): session closed for user root Nov 12 20:55:20.844671 sshd[2460]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:20.848297 systemd[1]: sshd@4-10.200.8.44:22-10.200.16.10:34868.service: Deactivated successfully. Nov 12 20:55:20.852801 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 20:55:20.853612 systemd-logind[1761]: Session 7 logged out. Waiting for processes to exit. Nov 12 20:55:20.854527 systemd-logind[1761]: Removed session 7. Nov 12 20:55:20.957646 systemd[1]: Started sshd@5-10.200.8.44:22-10.200.16.10:34872.service - OpenSSH per-connection server daemon (10.200.16.10:34872). Nov 12 20:55:21.579044 sshd[2469]: Accepted publickey for core from 10.200.16.10 port 34872 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:55:21.580726 sshd[2469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:21.585221 systemd-logind[1761]: New session 8 of user core. Nov 12 20:55:21.591658 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 12 20:55:21.924909 sudo[2474]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 20:55:21.925452 sudo[2474]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:55:21.928607 sudo[2474]: pam_unix(sudo:session): session closed for user root Nov 12 20:55:21.933305 sudo[2473]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 20:55:21.933641 sudo[2473]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:55:21.945445 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 20:55:21.947488 auditctl[2477]: No rules Nov 12 20:55:21.947833 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 20:55:21.948067 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 20:55:21.958924 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:55:21.979678 augenrules[2496]: No rules Nov 12 20:55:21.981147 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:55:21.983411 sudo[2473]: pam_unix(sudo:session): session closed for user root Nov 12 20:55:22.085331 sshd[2469]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:22.088657 systemd[1]: sshd@5-10.200.8.44:22-10.200.16.10:34872.service: Deactivated successfully. Nov 12 20:55:22.092880 systemd-logind[1761]: Session 8 logged out. Waiting for processes to exit. Nov 12 20:55:22.093305 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 20:55:22.094380 systemd-logind[1761]: Removed session 8. Nov 12 20:55:22.191411 systemd[1]: Started sshd@6-10.200.8.44:22-10.200.16.10:34874.service - OpenSSH per-connection server daemon (10.200.16.10:34874). 
Nov 12 20:55:22.813772 sshd[2505]: Accepted publickey for core from 10.200.16.10 port 34874 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:55:22.815479 sshd[2505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:22.820755 systemd-logind[1761]: New session 9 of user core. Nov 12 20:55:22.828868 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 20:55:23.158646 sudo[2509]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 20:55:23.159002 sudo[2509]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:55:23.591030 (dockerd)[2524]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 20:55:23.591040 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 20:55:24.085741 dockerd[2524]: time="2024-11-12T20:55:24.085683211Z" level=info msg="Starting up" Nov 12 20:55:24.324144 dockerd[2524]: time="2024-11-12T20:55:24.324012889Z" level=info msg="Loading containers: start." Nov 12 20:55:24.424183 kernel: Initializing XFRM netlink socket Nov 12 20:55:24.489087 systemd-networkd[1366]: docker0: Link UP Nov 12 20:55:24.520626 dockerd[2524]: time="2024-11-12T20:55:24.520592586Z" level=info msg="Loading containers: done." 
Nov 12 20:55:24.541981 dockerd[2524]: time="2024-11-12T20:55:24.541933381Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 20:55:24.542201 dockerd[2524]: time="2024-11-12T20:55:24.542027281Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 20:55:24.542201 dockerd[2524]: time="2024-11-12T20:55:24.542136782Z" level=info msg="Daemon has completed initialization" Nov 12 20:55:24.601910 dockerd[2524]: time="2024-11-12T20:55:24.601796328Z" level=info msg="API listen on /run/docker.sock" Nov 12 20:55:24.602199 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 20:55:25.767341 containerd[1782]: time="2024-11-12T20:55:25.767292644Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 20:55:26.452043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3115825271.mount: Deactivated successfully. 
Nov 12 20:55:28.254179 containerd[1782]: time="2024-11-12T20:55:28.254126867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:28.256060 containerd[1782]: time="2024-11-12T20:55:28.256004083Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=35140807" Nov 12 20:55:28.259650 containerd[1782]: time="2024-11-12T20:55:28.259512214Z" level=info msg="ImageCreate event name:\"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:28.267568 containerd[1782]: time="2024-11-12T20:55:28.267520783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:28.268734 containerd[1782]: time="2024-11-12T20:55:28.268556092Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"35137599\" in 2.501223348s" Nov 12 20:55:28.268734 containerd[1782]: time="2024-11-12T20:55:28.268598992Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\"" Nov 12 20:55:28.290098 containerd[1782]: time="2024-11-12T20:55:28.290061578Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 20:55:29.518795 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. 
Nov 12 20:55:29.527999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:55:29.668826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:55:29.679013 (kubelet)[2740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:55:29.752212 kubelet[2740]: E1112 20:55:29.752140 2740 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:55:29.755106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:55:29.755464 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:55:30.275349 containerd[1782]: time="2024-11-12T20:55:30.275295860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:30.277514 containerd[1782]: time="2024-11-12T20:55:30.277368578Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=32218307" Nov 12 20:55:30.281129 containerd[1782]: time="2024-11-12T20:55:30.280886209Z" level=info msg="ImageCreate event name:\"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:30.286013 containerd[1782]: time="2024-11-12T20:55:30.285959153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:30.287156 containerd[1782]: 
time="2024-11-12T20:55:30.287025462Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"33663665\" in 1.996925584s" Nov 12 20:55:30.287156 containerd[1782]: time="2024-11-12T20:55:30.287065262Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\"" Nov 12 20:55:30.308821 containerd[1782]: time="2024-11-12T20:55:30.308797650Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 20:55:31.455544 containerd[1782]: time="2024-11-12T20:55:31.455493975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:31.457535 containerd[1782]: time="2024-11-12T20:55:31.457485692Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=17332668" Nov 12 20:55:31.463145 containerd[1782]: time="2024-11-12T20:55:31.463090741Z" level=info msg="ImageCreate event name:\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:31.468283 containerd[1782]: time="2024-11-12T20:55:31.468226085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:31.469439 containerd[1782]: time="2024-11-12T20:55:31.469271694Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id 
\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"18778044\" in 1.160443744s" Nov 12 20:55:31.469439 containerd[1782]: time="2024-11-12T20:55:31.469310194Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\"" Nov 12 20:55:31.490028 containerd[1782]: time="2024-11-12T20:55:31.489996373Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 12 20:55:32.699213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2123839172.mount: Deactivated successfully. Nov 12 20:55:33.156662 containerd[1782]: time="2024-11-12T20:55:33.156612047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:33.159062 containerd[1782]: time="2024-11-12T20:55:33.158921467Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=28616824" Nov 12 20:55:33.162866 containerd[1782]: time="2024-11-12T20:55:33.162754401Z" level=info msg="ImageCreate event name:\"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:33.168678 containerd[1782]: time="2024-11-12T20:55:33.168618753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:33.169332 containerd[1782]: time="2024-11-12T20:55:33.169185658Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\", repo 
tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"28615835\" in 1.679131384s" Nov 12 20:55:33.169332 containerd[1782]: time="2024-11-12T20:55:33.169222258Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\"" Nov 12 20:55:33.190718 containerd[1782]: time="2024-11-12T20:55:33.190688848Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 20:55:33.713927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2855556987.mount: Deactivated successfully. Nov 12 20:55:34.932104 containerd[1782]: time="2024-11-12T20:55:34.931992226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:34.935434 containerd[1782]: time="2024-11-12T20:55:34.935296055Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Nov 12 20:55:34.939479 containerd[1782]: time="2024-11-12T20:55:34.939427192Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:34.944477 containerd[1782]: time="2024-11-12T20:55:34.944430636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:34.946259 containerd[1782]: time="2024-11-12T20:55:34.945424645Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.754697196s" Nov 12 20:55:34.946259 containerd[1782]: time="2024-11-12T20:55:34.945463045Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 20:55:34.967304 containerd[1782]: time="2024-11-12T20:55:34.967275238Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 20:55:35.528631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2692234891.mount: Deactivated successfully. Nov 12 20:55:35.548721 containerd[1782]: time="2024-11-12T20:55:35.548679972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:35.550616 containerd[1782]: time="2024-11-12T20:55:35.550563889Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Nov 12 20:55:35.555029 containerd[1782]: time="2024-11-12T20:55:35.554980128Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:35.560786 containerd[1782]: time="2024-11-12T20:55:35.560752779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:35.561598 containerd[1782]: time="2024-11-12T20:55:35.561465785Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 594.132347ms" Nov 12 
20:55:35.561598 containerd[1782]: time="2024-11-12T20:55:35.561501385Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 12 20:55:35.583796 containerd[1782]: time="2024-11-12T20:55:35.583589280Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 12 20:55:36.216258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount169462381.mount: Deactivated successfully. Nov 12 20:55:38.393926 containerd[1782]: time="2024-11-12T20:55:38.393795498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:38.396232 containerd[1782]: time="2024-11-12T20:55:38.396187419Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Nov 12 20:55:38.399669 containerd[1782]: time="2024-11-12T20:55:38.399618550Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:38.403981 containerd[1782]: time="2024-11-12T20:55:38.403851787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:38.405869 containerd[1782]: time="2024-11-12T20:55:38.405515002Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.821882521s" Nov 12 20:55:38.405869 containerd[1782]: time="2024-11-12T20:55:38.405564602Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image 
reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Nov 12 20:55:39.768701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Nov 12 20:55:39.780270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:55:39.971356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:55:39.972710 (kubelet)[2950]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:55:40.434877 kubelet[2950]: E1112 20:55:40.434823 2950 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:55:40.438678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:55:40.438888 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:55:41.532234 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:55:41.539437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:55:41.565066 systemd[1]: Reloading requested from client PID 2966 ('systemctl') (unit session-9.scope)... Nov 12 20:55:41.565082 systemd[1]: Reloading... Nov 12 20:55:41.661191 zram_generator::config[3002]: No configuration found. Nov 12 20:55:41.805692 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:55:41.884242 systemd[1]: Reloading finished in 318 ms. 
Nov 12 20:55:41.927804 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 20:55:41.928091 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 20:55:41.928462 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:55:41.931593 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:55:42.162341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:55:42.168284 (kubelet)[3085]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:55:42.209136 kubelet[3085]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:55:42.209136 kubelet[3085]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:55:42.209136 kubelet[3085]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 20:55:42.209559 kubelet[3085]: I1112 20:55:42.209197 3085 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:55:42.780610 kubelet[3085]: I1112 20:55:42.780561 3085 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 20:55:42.780610 kubelet[3085]: I1112 20:55:42.780601 3085 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:55:42.781789 kubelet[3085]: I1112 20:55:42.780929 3085 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 20:55:42.801947 kubelet[3085]: E1112 20:55:42.801913 3085 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.44:6443: connect: connection refused Nov 12 20:55:42.805080 kubelet[3085]: I1112 20:55:42.804957 3085 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:55:42.813967 kubelet[3085]: I1112 20:55:42.813946 3085 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:55:42.815392 kubelet[3085]: I1112 20:55:42.815364 3085 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:55:42.815586 kubelet[3085]: I1112 20:55:42.815552 3085 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:55:42.815742 kubelet[3085]: I1112 20:55:42.815594 3085 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:55:42.815742 kubelet[3085]: I1112 20:55:42.815609 3085 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:55:42.815742 kubelet[3085]: 
I1112 20:55:42.815732 3085 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:55:42.815856 kubelet[3085]: I1112 20:55:42.815840 3085 kubelet.go:396] "Attempting to sync node with API server" Nov 12 20:55:42.815901 kubelet[3085]: I1112 20:55:42.815858 3085 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:55:42.815901 kubelet[3085]: I1112 20:55:42.815888 3085 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:55:42.816082 kubelet[3085]: I1112 20:55:42.815907 3085 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:55:42.818386 kubelet[3085]: W1112 20:55:42.818288 3085 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused Nov 12 20:55:42.818386 kubelet[3085]: E1112 20:55:42.818341 3085 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused Nov 12 20:55:42.819230 kubelet[3085]: I1112 20:55:42.818936 3085 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:55:42.822513 kubelet[3085]: I1112 20:55:42.822488 3085 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:55:42.823537 kubelet[3085]: W1112 20:55:42.823518 3085 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 12 20:55:42.826537 kubelet[3085]: W1112 20:55:42.826493 3085 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-a-1543c8d709&limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused Nov 12 20:55:42.826615 kubelet[3085]: E1112 20:55:42.826552 3085 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-a-1543c8d709&limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused Nov 12 20:55:42.826812 kubelet[3085]: I1112 20:55:42.826794 3085 server.go:1256] "Started kubelet" Nov 12 20:55:42.827326 kubelet[3085]: I1112 20:55:42.827306 3085 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:55:42.827723 kubelet[3085]: I1112 20:55:42.827704 3085 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:55:42.827851 kubelet[3085]: I1112 20:55:42.827839 3085 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:55:42.829185 kubelet[3085]: I1112 20:55:42.828951 3085 server.go:461] "Adding debug handlers to kubelet server" Nov 12 20:55:42.829266 kubelet[3085]: I1112 20:55:42.829192 3085 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:55:42.835658 kubelet[3085]: I1112 20:55:42.835631 3085 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:55:42.838955 kubelet[3085]: I1112 20:55:42.838669 3085 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 20:55:42.838955 kubelet[3085]: I1112 20:55:42.838744 3085 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 20:55:42.840671 kubelet[3085]: E1112 20:55:42.840650 3085 event.go:355] 
"Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.44:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.44:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.0-a-1543c8d709.180754007d560c9e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.0-a-1543c8d709,UID:ci-4081.2.0-a-1543c8d709,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.0-a-1543c8d709,},FirstTimestamp:2024-11-12 20:55:42.826769566 +0000 UTC m=+0.654105518,LastTimestamp:2024-11-12 20:55:42.826769566 +0000 UTC m=+0.654105518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.0-a-1543c8d709,}" Nov 12 20:55:42.840804 kubelet[3085]: E1112 20:55:42.840764 3085 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-a-1543c8d709?timeout=10s\": dial tcp 10.200.8.44:6443: connect: connection refused" interval="200ms" Nov 12 20:55:42.841906 kubelet[3085]: I1112 20:55:42.841383 3085 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:55:42.842588 kubelet[3085]: W1112 20:55:42.842546 3085 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused Nov 12 20:55:42.842668 kubelet[3085]: E1112 20:55:42.842596 3085 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://10.200.8.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused Nov 12 20:55:42.842945 kubelet[3085]: E1112 20:55:42.842922 3085 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:55:42.843203 kubelet[3085]: I1112 20:55:42.843189 3085 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:55:42.843203 kubelet[3085]: I1112 20:55:42.843205 3085 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:55:42.856521 kubelet[3085]: I1112 20:55:42.856495 3085 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:55:42.857495 kubelet[3085]: I1112 20:55:42.857470 3085 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 20:55:42.857495 kubelet[3085]: I1112 20:55:42.857496 3085 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:55:42.857609 kubelet[3085]: I1112 20:55:42.857512 3085 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 20:55:42.857609 kubelet[3085]: E1112 20:55:42.857556 3085 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:55:42.864609 kubelet[3085]: W1112 20:55:42.864526 3085 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused Nov 12 20:55:42.864609 kubelet[3085]: E1112 20:55:42.864570 3085 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://10.200.8.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused Nov 12 20:55:42.895975 kubelet[3085]: I1112 20:55:42.895937 3085 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:55:42.895975 kubelet[3085]: I1112 20:55:42.895956 3085 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:55:42.895975 kubelet[3085]: I1112 20:55:42.895974 3085 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:55:42.900443 kubelet[3085]: I1112 20:55:42.900421 3085 policy_none.go:49] "None policy: Start" Nov 12 20:55:42.901047 kubelet[3085]: I1112 20:55:42.901005 3085 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:55:42.901047 kubelet[3085]: I1112 20:55:42.901032 3085 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:55:42.910179 kubelet[3085]: I1112 20:55:42.909818 3085 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:55:42.910179 kubelet[3085]: I1112 20:55:42.910067 3085 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:55:42.912046 kubelet[3085]: E1112 20:55:42.912027 3085 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.0-a-1543c8d709\" not found" Nov 12 20:55:42.937787 kubelet[3085]: I1112 20:55:42.937762 3085 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-1543c8d709" Nov 12 20:55:42.938097 kubelet[3085]: E1112 20:55:42.938074 3085 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.44:6443/api/v1/nodes\": dial tcp 10.200.8.44:6443: connect: connection refused" node="ci-4081.2.0-a-1543c8d709" Nov 12 20:55:42.958581 kubelet[3085]: I1112 20:55:42.958529 3085 topology_manager.go:215] "Topology Admit Handler" podUID="bedae4934b135260a4dc90a054826da7" 
podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:42.959997 kubelet[3085]: I1112 20:55:42.959973 3085 topology_manager.go:215] "Topology Admit Handler" podUID="3a9720762d7dd3998a3be4965b3ee7aa" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:42.963120 kubelet[3085]: I1112 20:55:42.961420 3085 topology_manager.go:215] "Topology Admit Handler" podUID="e86d017d312a9eb652c93e87f8d7702c" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:43.042042 kubelet[3085]: E1112 20:55:43.041956 3085 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-a-1543c8d709?timeout=10s\": dial tcp 10.200.8.44:6443: connect: connection refused" interval="400ms" Nov 12 20:55:43.139947 kubelet[3085]: I1112 20:55:43.139659 3085 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a9720762d7dd3998a3be4965b3ee7aa-ca-certs\") pod \"kube-controller-manager-ci-4081.2.0-a-1543c8d709\" (UID: \"3a9720762d7dd3998a3be4965b3ee7aa\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:43.139947 kubelet[3085]: I1112 20:55:43.139721 3085 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3a9720762d7dd3998a3be4965b3ee7aa-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.0-a-1543c8d709\" (UID: \"3a9720762d7dd3998a3be4965b3ee7aa\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:43.139947 kubelet[3085]: I1112 20:55:43.139754 3085 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/3a9720762d7dd3998a3be4965b3ee7aa-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.0-a-1543c8d709\" (UID: \"3a9720762d7dd3998a3be4965b3ee7aa\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:43.139947 kubelet[3085]: I1112 20:55:43.139791 3085 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a9720762d7dd3998a3be4965b3ee7aa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.0-a-1543c8d709\" (UID: \"3a9720762d7dd3998a3be4965b3ee7aa\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:43.139947 kubelet[3085]: I1112 20:55:43.139823 3085 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bedae4934b135260a4dc90a054826da7-ca-certs\") pod \"kube-apiserver-ci-4081.2.0-a-1543c8d709\" (UID: \"bedae4934b135260a4dc90a054826da7\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:43.140313 kubelet[3085]: I1112 20:55:43.139858 3085 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bedae4934b135260a4dc90a054826da7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.0-a-1543c8d709\" (UID: \"bedae4934b135260a4dc90a054826da7\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:43.140313 kubelet[3085]: I1112 20:55:43.139889 3085 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a9720762d7dd3998a3be4965b3ee7aa-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.0-a-1543c8d709\" (UID: \"3a9720762d7dd3998a3be4965b3ee7aa\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-1543c8d709" Nov 12 
20:55:43.140313 kubelet[3085]: I1112 20:55:43.139922 3085 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e86d017d312a9eb652c93e87f8d7702c-kubeconfig\") pod \"kube-scheduler-ci-4081.2.0-a-1543c8d709\" (UID: \"e86d017d312a9eb652c93e87f8d7702c\") " pod="kube-system/kube-scheduler-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:43.140313 kubelet[3085]: I1112 20:55:43.139953 3085 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bedae4934b135260a4dc90a054826da7-k8s-certs\") pod \"kube-apiserver-ci-4081.2.0-a-1543c8d709\" (UID: \"bedae4934b135260a4dc90a054826da7\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:43.140577 kubelet[3085]: I1112 20:55:43.140446 3085 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-1543c8d709" Nov 12 20:55:43.140924 kubelet[3085]: E1112 20:55:43.140901 3085 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.44:6443/api/v1/nodes\": dial tcp 10.200.8.44:6443: connect: connection refused" node="ci-4081.2.0-a-1543c8d709" Nov 12 20:55:43.265527 containerd[1782]: time="2024-11-12T20:55:43.265464068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.0-a-1543c8d709,Uid:bedae4934b135260a4dc90a054826da7,Namespace:kube-system,Attempt:0,}" Nov 12 20:55:43.268808 containerd[1782]: time="2024-11-12T20:55:43.268687796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.0-a-1543c8d709,Uid:3a9720762d7dd3998a3be4965b3ee7aa,Namespace:kube-system,Attempt:0,}" Nov 12 20:55:43.270572 containerd[1782]: time="2024-11-12T20:55:43.270450112Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.0-a-1543c8d709,Uid:e86d017d312a9eb652c93e87f8d7702c,Namespace:kube-system,Attempt:0,}" Nov 12 20:55:43.442843 kubelet[3085]: E1112 20:55:43.442762 3085 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-a-1543c8d709?timeout=10s\": dial tcp 10.200.8.44:6443: connect: connection refused" interval="800ms" Nov 12 20:55:43.543336 kubelet[3085]: I1112 20:55:43.543306 3085 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-1543c8d709" Nov 12 20:55:43.543636 kubelet[3085]: E1112 20:55:43.543600 3085 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.44:6443/api/v1/nodes\": dial tcp 10.200.8.44:6443: connect: connection refused" node="ci-4081.2.0-a-1543c8d709" Nov 12 20:55:43.855748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4197521422.mount: Deactivated successfully. 
Nov 12 20:55:43.885480 containerd[1782]: time="2024-11-12T20:55:43.885439081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:55:43.888004 containerd[1782]: time="2024-11-12T20:55:43.887910003Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:55:43.890356 containerd[1782]: time="2024-11-12T20:55:43.890236724Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Nov 12 20:55:43.893586 containerd[1782]: time="2024-11-12T20:55:43.893534553Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:55:43.896490 containerd[1782]: time="2024-11-12T20:55:43.896461879Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:55:43.899567 containerd[1782]: time="2024-11-12T20:55:43.899270604Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:55:43.903566 containerd[1782]: time="2024-11-12T20:55:43.903303340Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:55:43.907461 containerd[1782]: time="2024-11-12T20:55:43.907431976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:55:43.908268 
containerd[1782]: time="2024-11-12T20:55:43.908235284Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 642.677215ms" Nov 12 20:55:43.909613 containerd[1782]: time="2024-11-12T20:55:43.909580096Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 639.056483ms" Nov 12 20:55:43.913801 containerd[1782]: time="2024-11-12T20:55:43.913766633Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 644.913435ms" Nov 12 20:55:43.976621 kubelet[3085]: W1112 20:55:43.976487 3085 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-a-1543c8d709&limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused Nov 12 20:55:43.976621 kubelet[3085]: E1112 20:55:43.976590 3085 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-a-1543c8d709&limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused Nov 12 20:55:44.005580 kubelet[3085]: W1112 20:55:44.004932 3085 
reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused Nov 12 20:55:44.005580 kubelet[3085]: E1112 20:55:44.004997 3085 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused Nov 12 20:55:44.021830 kubelet[3085]: W1112 20:55:44.020202 3085 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused Nov 12 20:55:44.021830 kubelet[3085]: E1112 20:55:44.020258 3085 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused Nov 12 20:55:44.185833 containerd[1782]: time="2024-11-12T20:55:44.182813125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:44.185833 containerd[1782]: time="2024-11-12T20:55:44.183264329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:44.185833 containerd[1782]: time="2024-11-12T20:55:44.183280929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:44.185833 containerd[1782]: time="2024-11-12T20:55:44.183438931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:44.201452 containerd[1782]: time="2024-11-12T20:55:44.201063688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:44.201452 containerd[1782]: time="2024-11-12T20:55:44.201190489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:44.201452 containerd[1782]: time="2024-11-12T20:55:44.201239989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:44.201452 containerd[1782]: time="2024-11-12T20:55:44.201345890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:44.206065 containerd[1782]: time="2024-11-12T20:55:44.205997631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:44.206694 containerd[1782]: time="2024-11-12T20:55:44.206649137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:44.206848 containerd[1782]: time="2024-11-12T20:55:44.206816039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:44.207628 containerd[1782]: time="2024-11-12T20:55:44.207585546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:44.236472 kubelet[3085]: W1112 20:55:44.236381 3085 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused Nov 12 20:55:44.236472 kubelet[3085]: E1112 20:55:44.236457 3085 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused Nov 12 20:55:44.247820 kubelet[3085]: E1112 20:55:44.243711 3085 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-a-1543c8d709?timeout=10s\": dial tcp 10.200.8.44:6443: connect: connection refused" interval="1.6s" Nov 12 20:55:44.303212 containerd[1782]: time="2024-11-12T20:55:44.303171896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.0-a-1543c8d709,Uid:3a9720762d7dd3998a3be4965b3ee7aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb91219f2749ca72038b5a100e8618c96fe772e49574144c3bd2f88cd794566f\"" Nov 12 20:55:44.307951 containerd[1782]: time="2024-11-12T20:55:44.307918538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.0-a-1543c8d709,Uid:e86d017d312a9eb652c93e87f8d7702c,Namespace:kube-system,Attempt:0,} returns sandbox id \"21fe1a907f6a0b386d65f175ffbaba4bf36d530ec5a4aeacd324e687a69ab5f0\"" Nov 12 20:55:44.314489 containerd[1782]: time="2024-11-12T20:55:44.314313495Z" level=info msg="CreateContainer within sandbox \"fb91219f2749ca72038b5a100e8618c96fe772e49574144c3bd2f88cd794566f\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 20:55:44.315819 containerd[1782]: time="2024-11-12T20:55:44.315793108Z" level=info msg="CreateContainer within sandbox \"21fe1a907f6a0b386d65f175ffbaba4bf36d530ec5a4aeacd324e687a69ab5f0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 20:55:44.317232 containerd[1782]: time="2024-11-12T20:55:44.317210620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.0-a-1543c8d709,Uid:bedae4934b135260a4dc90a054826da7,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f170019d04ec2c0788ec815a49b07e77fbbabfa6d1fe3e79181651114a973ec\"" Nov 12 20:55:44.319499 containerd[1782]: time="2024-11-12T20:55:44.319417140Z" level=info msg="CreateContainer within sandbox \"0f170019d04ec2c0788ec815a49b07e77fbbabfa6d1fe3e79181651114a973ec\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 20:55:44.345966 kubelet[3085]: I1112 20:55:44.345849 3085 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-1543c8d709" Nov 12 20:55:44.348446 kubelet[3085]: E1112 20:55:44.346199 3085 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.44:6443/api/v1/nodes\": dial tcp 10.200.8.44:6443: connect: connection refused" node="ci-4081.2.0-a-1543c8d709" Nov 12 20:55:44.377908 containerd[1782]: time="2024-11-12T20:55:44.377809259Z" level=info msg="CreateContainer within sandbox \"fb91219f2749ca72038b5a100e8618c96fe772e49574144c3bd2f88cd794566f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0a8fea949a5c6a1fcb831f2675937b39845506d484a0d40eab800b188af82627\"" Nov 12 20:55:44.378388 containerd[1782]: time="2024-11-12T20:55:44.378363564Z" level=info msg="StartContainer for \"0a8fea949a5c6a1fcb831f2675937b39845506d484a0d40eab800b188af82627\"" Nov 12 20:55:44.402601 containerd[1782]: time="2024-11-12T20:55:44.402281777Z" level=info msg="CreateContainer 
within sandbox \"21fe1a907f6a0b386d65f175ffbaba4bf36d530ec5a4aeacd324e687a69ab5f0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4cfdf7804c354399d974189caa8c3726b33bdc33281c4cf5a2541d9bbbd06065\"" Nov 12 20:55:44.403339 containerd[1782]: time="2024-11-12T20:55:44.403154385Z" level=info msg="StartContainer for \"4cfdf7804c354399d974189caa8c3726b33bdc33281c4cf5a2541d9bbbd06065\"" Nov 12 20:55:44.424031 containerd[1782]: time="2024-11-12T20:55:44.424000970Z" level=info msg="CreateContainer within sandbox \"0f170019d04ec2c0788ec815a49b07e77fbbabfa6d1fe3e79181651114a973ec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a720260a3bf2f1e06a48c0d7732a9a6c4d4762c80bc99eeb0824e6d8c596a547\"" Nov 12 20:55:44.425186 containerd[1782]: time="2024-11-12T20:55:44.424631876Z" level=info msg="StartContainer for \"a720260a3bf2f1e06a48c0d7732a9a6c4d4762c80bc99eeb0824e6d8c596a547\"" Nov 12 20:55:44.480452 containerd[1782]: time="2024-11-12T20:55:44.477639347Z" level=info msg="StartContainer for \"0a8fea949a5c6a1fcb831f2675937b39845506d484a0d40eab800b188af82627\" returns successfully" Nov 12 20:55:44.530090 containerd[1782]: time="2024-11-12T20:55:44.530021113Z" level=info msg="StartContainer for \"4cfdf7804c354399d974189caa8c3726b33bdc33281c4cf5a2541d9bbbd06065\" returns successfully" Nov 12 20:55:44.574453 containerd[1782]: time="2024-11-12T20:55:44.574409808Z" level=info msg="StartContainer for \"a720260a3bf2f1e06a48c0d7732a9a6c4d4762c80bc99eeb0824e6d8c596a547\" returns successfully" Nov 12 20:55:45.952249 kubelet[3085]: I1112 20:55:45.952213 3085 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-1543c8d709" Nov 12 20:55:46.633872 kubelet[3085]: E1112 20:55:46.633827 3085 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.0-a-1543c8d709\" not found" node="ci-4081.2.0-a-1543c8d709" Nov 12 20:55:46.658703 kubelet[3085]: I1112 
20:55:46.658665 3085 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.0-a-1543c8d709" Nov 12 20:55:46.820417 kubelet[3085]: I1112 20:55:46.820387 3085 apiserver.go:52] "Watching apiserver" Nov 12 20:55:46.838971 kubelet[3085]: I1112 20:55:46.838937 3085 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 20:55:46.905655 kubelet[3085]: E1112 20:55:46.903946 3085 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.0-a-1543c8d709\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:49.420609 systemd[1]: Reloading requested from client PID 3362 ('systemctl') (unit session-9.scope)... Nov 12 20:55:49.420625 systemd[1]: Reloading... Nov 12 20:55:49.510188 zram_generator::config[3402]: No configuration found. Nov 12 20:55:49.635893 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:55:49.717427 systemd[1]: Reloading finished in 296 ms. Nov 12 20:55:49.754335 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:55:49.764329 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 20:55:49.764686 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:55:49.784662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:55:49.887325 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 20:55:49.894559 (kubelet)[3479]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:55:49.952367 kubelet[3479]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:55:49.952367 kubelet[3479]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:55:49.952367 kubelet[3479]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:55:49.952843 kubelet[3479]: I1112 20:55:49.952431 3479 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:55:49.956552 kubelet[3479]: I1112 20:55:49.956522 3479 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 20:55:49.956552 kubelet[3479]: I1112 20:55:49.956545 3479 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:55:49.956761 kubelet[3479]: I1112 20:55:49.956742 3479 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 20:55:49.958045 kubelet[3479]: I1112 20:55:49.958022 3479 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 20:55:49.959950 kubelet[3479]: I1112 20:55:49.959749 3479 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:55:49.968344 kubelet[3479]: I1112 20:55:49.968261 3479 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:55:49.968713 kubelet[3479]: I1112 20:55:49.968693 3479 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:55:49.968895 kubelet[3479]: I1112 20:55:49.968865 3479 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:55:49.968895 kubelet[3479]: I1112 20:55:49.968894 3479 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:55:49.969083 kubelet[3479]: I1112 20:55:49.968909 3479 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:55:49.969083 kubelet[3479]: 
I1112 20:55:49.968945 3479 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:55:49.969083 kubelet[3479]: I1112 20:55:49.969037 3479 kubelet.go:396] "Attempting to sync node with API server" Nov 12 20:55:49.969083 kubelet[3479]: I1112 20:55:49.969053 3479 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:55:49.969083 kubelet[3479]: I1112 20:55:49.969079 3479 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:55:49.969294 kubelet[3479]: I1112 20:55:49.969097 3479 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:55:49.972184 kubelet[3479]: I1112 20:55:49.971768 3479 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:55:49.972184 kubelet[3479]: I1112 20:55:49.971949 3479 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:55:49.972374 kubelet[3479]: I1112 20:55:49.972358 3479 server.go:1256] "Started kubelet" Nov 12 20:55:49.989173 kubelet[3479]: I1112 20:55:49.986536 3479 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:55:49.993511 kubelet[3479]: I1112 20:55:49.993456 3479 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:55:49.997189 kubelet[3479]: I1112 20:55:49.994286 3479 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:55:50.006182 kubelet[3479]: I1112 20:55:50.002849 3479 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:55:50.006182 kubelet[3479]: I1112 20:55:49.996259 3479 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:55:50.007199 kubelet[3479]: I1112 20:55:49.996280 3479 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 20:55:50.007199 kubelet[3479]: I1112 20:55:50.006688 3479 reconciler_new.go:29] 
"Reconciler: start to sync state" Nov 12 20:55:50.010881 kubelet[3479]: I1112 20:55:50.010858 3479 server.go:461] "Adding debug handlers to kubelet server" Nov 12 20:55:50.014812 kubelet[3479]: I1112 20:55:50.014787 3479 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:55:50.014891 kubelet[3479]: I1112 20:55:50.014863 3479 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:55:50.020751 kubelet[3479]: I1112 20:55:50.019699 3479 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:55:50.021357 kubelet[3479]: I1112 20:55:50.021323 3479 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 20:55:50.021439 kubelet[3479]: I1112 20:55:50.021364 3479 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:55:50.021439 kubelet[3479]: I1112 20:55:50.021384 3479 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 20:55:50.021439 kubelet[3479]: E1112 20:55:50.021437 3479 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:55:50.031981 kubelet[3479]: E1112 20:55:50.031322 3479 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:55:50.031981 kubelet[3479]: I1112 20:55:50.031369 3479 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:55:50.089776 kubelet[3479]: I1112 20:55:50.089759 3479 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:55:50.089888 kubelet[3479]: I1112 20:55:50.089882 3479 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:55:50.089945 kubelet[3479]: I1112 20:55:50.089940 3479 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:55:50.090092 kubelet[3479]: I1112 20:55:50.090084 3479 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 20:55:50.090205 kubelet[3479]: I1112 20:55:50.090153 3479 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 20:55:50.090271 kubelet[3479]: I1112 20:55:50.090263 3479 policy_none.go:49] "None policy: Start" Nov 12 20:55:50.091323 kubelet[3479]: I1112 20:55:50.090877 3479 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:55:50.091323 kubelet[3479]: I1112 20:55:50.090898 3479 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:55:50.091323 kubelet[3479]: I1112 20:55:50.091014 3479 state_mem.go:75] "Updated machine memory state" Nov 12 20:55:50.092237 kubelet[3479]: I1112 20:55:50.092223 3479 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:55:50.092564 kubelet[3479]: I1112 20:55:50.092553 3479 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:55:50.101128 kubelet[3479]: I1112 20:55:50.101113 3479 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-1543c8d709" Nov 12 20:55:50.111066 kubelet[3479]: I1112 20:55:50.111045 3479 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.2.0-a-1543c8d709" Nov 12 20:55:50.111230 kubelet[3479]: I1112 20:55:50.111220 
3479 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.0-a-1543c8d709" Nov 12 20:55:50.123343 kubelet[3479]: I1112 20:55:50.122720 3479 topology_manager.go:215] "Topology Admit Handler" podUID="e86d017d312a9eb652c93e87f8d7702c" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:50.123343 kubelet[3479]: I1112 20:55:50.122807 3479 topology_manager.go:215] "Topology Admit Handler" podUID="bedae4934b135260a4dc90a054826da7" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:50.123343 kubelet[3479]: I1112 20:55:50.122853 3479 topology_manager.go:215] "Topology Admit Handler" podUID="3a9720762d7dd3998a3be4965b3ee7aa" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:50.142667 kubelet[3479]: W1112 20:55:50.142646 3479 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 20:55:50.143198 kubelet[3479]: W1112 20:55:50.142798 3479 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 20:55:50.143551 kubelet[3479]: W1112 20:55:50.142863 3479 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 20:55:50.308527 kubelet[3479]: I1112 20:55:50.308476 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a9720762d7dd3998a3be4965b3ee7aa-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.0-a-1543c8d709\" (UID: \"3a9720762d7dd3998a3be4965b3ee7aa\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:50.308677 kubelet[3479]: I1112 20:55:50.308546 3479 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a9720762d7dd3998a3be4965b3ee7aa-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.0-a-1543c8d709\" (UID: \"3a9720762d7dd3998a3be4965b3ee7aa\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:50.308677 kubelet[3479]: I1112 20:55:50.308586 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a9720762d7dd3998a3be4965b3ee7aa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.0-a-1543c8d709\" (UID: \"3a9720762d7dd3998a3be4965b3ee7aa\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:50.308677 kubelet[3479]: I1112 20:55:50.308623 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bedae4934b135260a4dc90a054826da7-ca-certs\") pod \"kube-apiserver-ci-4081.2.0-a-1543c8d709\" (UID: \"bedae4934b135260a4dc90a054826da7\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:50.308677 kubelet[3479]: I1112 20:55:50.308658 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bedae4934b135260a4dc90a054826da7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.0-a-1543c8d709\" (UID: \"bedae4934b135260a4dc90a054826da7\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:50.308900 kubelet[3479]: I1112 20:55:50.308690 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a9720762d7dd3998a3be4965b3ee7aa-ca-certs\") pod \"kube-controller-manager-ci-4081.2.0-a-1543c8d709\" (UID: 
\"3a9720762d7dd3998a3be4965b3ee7aa\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:50.308900 kubelet[3479]: I1112 20:55:50.308726 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3a9720762d7dd3998a3be4965b3ee7aa-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.0-a-1543c8d709\" (UID: \"3a9720762d7dd3998a3be4965b3ee7aa\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:50.308900 kubelet[3479]: I1112 20:55:50.308764 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e86d017d312a9eb652c93e87f8d7702c-kubeconfig\") pod \"kube-scheduler-ci-4081.2.0-a-1543c8d709\" (UID: \"e86d017d312a9eb652c93e87f8d7702c\") " pod="kube-system/kube-scheduler-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:50.308900 kubelet[3479]: I1112 20:55:50.308802 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bedae4934b135260a4dc90a054826da7-k8s-certs\") pod \"kube-apiserver-ci-4081.2.0-a-1543c8d709\" (UID: \"bedae4934b135260a4dc90a054826da7\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:50.971752 kubelet[3479]: I1112 20:55:50.971706 3479 apiserver.go:52] "Watching apiserver" Nov 12 20:55:51.008481 kubelet[3479]: I1112 20:55:51.007568 3479 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 20:55:51.096189 kubelet[3479]: W1112 20:55:51.092977 3479 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 20:55:51.096189 kubelet[3479]: E1112 20:55:51.093046 3479 kubelet.go:1921] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-ci-4081.2.0-a-1543c8d709\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.0-a-1543c8d709" Nov 12 20:55:51.174881 kubelet[3479]: I1112 20:55:51.174852 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.0-a-1543c8d709" podStartSLOduration=1.17470059 podStartE2EDuration="1.17470059s" podCreationTimestamp="2024-11-12 20:55:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:55:51.149029004 +0000 UTC m=+1.246186976" watchObservedRunningTime="2024-11-12 20:55:51.17470059 +0000 UTC m=+1.271858462" Nov 12 20:55:51.178055 kubelet[3479]: I1112 20:55:51.175738 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.0-a-1543c8d709" podStartSLOduration=1.175702201 podStartE2EDuration="1.175702201s" podCreationTimestamp="2024-11-12 20:55:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:55:51.174576289 +0000 UTC m=+1.271734161" watchObservedRunningTime="2024-11-12 20:55:51.175702201 +0000 UTC m=+1.272860073" Nov 12 20:55:51.227544 kubelet[3479]: I1112 20:55:51.227435 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.0-a-1543c8d709" podStartSLOduration=1.227390377 podStartE2EDuration="1.227390377s" podCreationTimestamp="2024-11-12 20:55:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:55:51.207243452 +0000 UTC m=+1.304401424" watchObservedRunningTime="2024-11-12 20:55:51.227390377 +0000 UTC m=+1.324548249" Nov 12 20:55:55.147825 sudo[2509]: pam_unix(sudo:session): session closed for user root Nov 12 20:55:55.247924 sshd[2505]: pam_unix(sshd:session): 
session closed for user core Nov 12 20:55:55.250909 systemd[1]: sshd@6-10.200.8.44:22-10.200.16.10:34874.service: Deactivated successfully. Nov 12 20:55:55.256069 systemd-logind[1761]: Session 9 logged out. Waiting for processes to exit. Nov 12 20:55:55.256469 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 20:55:55.257942 systemd-logind[1761]: Removed session 9. Nov 12 20:56:03.565069 kubelet[3479]: I1112 20:56:03.565039 3479 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 20:56:03.567595 containerd[1782]: time="2024-11-12T20:56:03.567541298Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 12 20:56:03.568119 kubelet[3479]: I1112 20:56:03.567756 3479 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 20:56:03.905395 kubelet[3479]: I1112 20:56:03.905274 3479 topology_manager.go:215] "Topology Admit Handler" podUID="8a809e0b-328e-4674-8d90-e8fee917b451" podNamespace="kube-system" podName="kube-proxy-5hmht" Nov 12 20:56:03.995217 kubelet[3479]: I1112 20:56:03.995189 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8a809e0b-328e-4674-8d90-e8fee917b451-kube-proxy\") pod \"kube-proxy-5hmht\" (UID: \"8a809e0b-328e-4674-8d90-e8fee917b451\") " pod="kube-system/kube-proxy-5hmht" Nov 12 20:56:03.995367 kubelet[3479]: I1112 20:56:03.995311 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4s59\" (UniqueName: \"kubernetes.io/projected/8a809e0b-328e-4674-8d90-e8fee917b451-kube-api-access-c4s59\") pod \"kube-proxy-5hmht\" (UID: \"8a809e0b-328e-4674-8d90-e8fee917b451\") " pod="kube-system/kube-proxy-5hmht" Nov 12 20:56:03.995431 kubelet[3479]: I1112 20:56:03.995379 3479 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a809e0b-328e-4674-8d90-e8fee917b451-xtables-lock\") pod \"kube-proxy-5hmht\" (UID: \"8a809e0b-328e-4674-8d90-e8fee917b451\") " pod="kube-system/kube-proxy-5hmht" Nov 12 20:56:03.995482 kubelet[3479]: I1112 20:56:03.995449 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a809e0b-328e-4674-8d90-e8fee917b451-lib-modules\") pod \"kube-proxy-5hmht\" (UID: \"8a809e0b-328e-4674-8d90-e8fee917b451\") " pod="kube-system/kube-proxy-5hmht" Nov 12 20:56:04.103118 kubelet[3479]: E1112 20:56:04.103087 3479 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 12 20:56:04.103118 kubelet[3479]: E1112 20:56:04.103121 3479 projected.go:200] Error preparing data for projected volume kube-api-access-c4s59 for pod kube-system/kube-proxy-5hmht: configmap "kube-root-ca.crt" not found Nov 12 20:56:04.103440 kubelet[3479]: E1112 20:56:04.103201 3479 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a809e0b-328e-4674-8d90-e8fee917b451-kube-api-access-c4s59 podName:8a809e0b-328e-4674-8d90-e8fee917b451 nodeName:}" failed. No retries permitted until 2024-11-12 20:56:04.603179424 +0000 UTC m=+14.700337396 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-c4s59" (UniqueName: "kubernetes.io/projected/8a809e0b-328e-4674-8d90-e8fee917b451-kube-api-access-c4s59") pod "kube-proxy-5hmht" (UID: "8a809e0b-328e-4674-8d90-e8fee917b451") : configmap "kube-root-ca.crt" not found Nov 12 20:56:04.681592 kubelet[3479]: I1112 20:56:04.680793 3479 topology_manager.go:215] "Topology Admit Handler" podUID="3a207d71-dbe9-44b6-ba06-a2bf1b2b52a1" podNamespace="tigera-operator" podName="tigera-operator-56b74f76df-xgvvb" Nov 12 20:56:04.699541 kubelet[3479]: I1112 20:56:04.699454 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3a207d71-dbe9-44b6-ba06-a2bf1b2b52a1-var-lib-calico\") pod \"tigera-operator-56b74f76df-xgvvb\" (UID: \"3a207d71-dbe9-44b6-ba06-a2bf1b2b52a1\") " pod="tigera-operator/tigera-operator-56b74f76df-xgvvb" Nov 12 20:56:04.699823 kubelet[3479]: I1112 20:56:04.699645 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6l9z\" (UniqueName: \"kubernetes.io/projected/3a207d71-dbe9-44b6-ba06-a2bf1b2b52a1-kube-api-access-s6l9z\") pod \"tigera-operator-56b74f76df-xgvvb\" (UID: \"3a207d71-dbe9-44b6-ba06-a2bf1b2b52a1\") " pod="tigera-operator/tigera-operator-56b74f76df-xgvvb" Nov 12 20:56:04.813951 containerd[1782]: time="2024-11-12T20:56:04.813378017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5hmht,Uid:8a809e0b-328e-4674-8d90-e8fee917b451,Namespace:kube-system,Attempt:0,}" Nov 12 20:56:04.860808 containerd[1782]: time="2024-11-12T20:56:04.860706697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:04.860808 containerd[1782]: time="2024-11-12T20:56:04.860760297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:04.860808 containerd[1782]: time="2024-11-12T20:56:04.860775697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:04.861033 containerd[1782]: time="2024-11-12T20:56:04.860897799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:04.900829 containerd[1782]: time="2024-11-12T20:56:04.900784203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5hmht,Uid:8a809e0b-328e-4674-8d90-e8fee917b451,Namespace:kube-system,Attempt:0,} returns sandbox id \"b10e8baf89d37552a02d75191795acffc0e0fdae09b7d5707ca8a49a985bba36\"" Nov 12 20:56:04.904110 containerd[1782]: time="2024-11-12T20:56:04.903915534Z" level=info msg="CreateContainer within sandbox \"b10e8baf89d37552a02d75191795acffc0e0fdae09b7d5707ca8a49a985bba36\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 20:56:04.943411 containerd[1782]: time="2024-11-12T20:56:04.943309074Z" level=info msg="CreateContainer within sandbox \"b10e8baf89d37552a02d75191795acffc0e0fdae09b7d5707ca8a49a985bba36\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c6cfd8d8e548dfa3819c6d3fe2c31cde5fb2daafc07c15de07b0fe5ef659e854\"" Nov 12 20:56:04.944114 containerd[1782]: time="2024-11-12T20:56:04.944085884Z" level=info msg="StartContainer for \"c6cfd8d8e548dfa3819c6d3fe2c31cde5fb2daafc07c15de07b0fe5ef659e854\"" Nov 12 20:56:04.989321 containerd[1782]: time="2024-11-12T20:56:04.988960934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-xgvvb,Uid:3a207d71-dbe9-44b6-ba06-a2bf1b2b52a1,Namespace:tigera-operator,Attempt:0,}" Nov 12 20:56:04.996588 containerd[1782]: time="2024-11-12T20:56:04.996546127Z" level=info msg="StartContainer for 
\"c6cfd8d8e548dfa3819c6d3fe2c31cde5fb2daafc07c15de07b0fe5ef659e854\" returns successfully" Nov 12 20:56:05.042990 containerd[1782]: time="2024-11-12T20:56:05.042892096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:05.043267 containerd[1782]: time="2024-11-12T20:56:05.043225900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:05.043351 containerd[1782]: time="2024-11-12T20:56:05.043282101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:05.045137 containerd[1782]: time="2024-11-12T20:56:05.044611817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:05.124894 containerd[1782]: time="2024-11-12T20:56:05.124797400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-xgvvb,Uid:3a207d71-dbe9-44b6-ba06-a2bf1b2b52a1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0c1a7c7792c916bacb4ded2361f46aab010e58ef8cc0dcdb765ca20fd199d5af\"" Nov 12 20:56:05.127147 containerd[1782]: time="2024-11-12T20:56:05.126339719Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\"" Nov 12 20:56:08.150944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1247364507.mount: Deactivated successfully. 
Nov 12 20:56:10.042247 kubelet[3479]: I1112 20:56:10.042197 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5hmht" podStartSLOduration=7.042126718 podStartE2EDuration="7.042126718s" podCreationTimestamp="2024-11-12 20:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:05.110283822 +0000 UTC m=+15.207441794" watchObservedRunningTime="2024-11-12 20:56:10.042126718 +0000 UTC m=+20.139284690" Nov 12 20:56:10.296702 containerd[1782]: time="2024-11-12T20:56:10.296596140Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:10.298901 containerd[1782]: time="2024-11-12T20:56:10.298843667Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763367" Nov 12 20:56:10.302877 containerd[1782]: time="2024-11-12T20:56:10.302816316Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:10.307830 containerd[1782]: time="2024-11-12T20:56:10.307781277Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:10.308629 containerd[1782]: time="2024-11-12T20:56:10.308502586Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 5.182124866s" Nov 12 20:56:10.308629 containerd[1782]: time="2024-11-12T20:56:10.308544286Z" 
level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\"" Nov 12 20:56:10.312640 containerd[1782]: time="2024-11-12T20:56:10.312607336Z" level=info msg="CreateContainer within sandbox \"0c1a7c7792c916bacb4ded2361f46aab010e58ef8cc0dcdb765ca20fd199d5af\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 12 20:56:10.350289 containerd[1782]: time="2024-11-12T20:56:10.350257998Z" level=info msg="CreateContainer within sandbox \"0c1a7c7792c916bacb4ded2361f46aab010e58ef8cc0dcdb765ca20fd199d5af\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"af3601c81aeb1ddbe9ea5082b43b92f0b23e2da47ba343d30810b850bba7b01f\"" Nov 12 20:56:10.350746 containerd[1782]: time="2024-11-12T20:56:10.350701104Z" level=info msg="StartContainer for \"af3601c81aeb1ddbe9ea5082b43b92f0b23e2da47ba343d30810b850bba7b01f\"" Nov 12 20:56:10.405385 containerd[1782]: time="2024-11-12T20:56:10.405344274Z" level=info msg="StartContainer for \"af3601c81aeb1ddbe9ea5082b43b92f0b23e2da47ba343d30810b850bba7b01f\" returns successfully" Nov 12 20:56:13.685196 kubelet[3479]: I1112 20:56:13.681943 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-56b74f76df-xgvvb" podStartSLOduration=4.498839978 podStartE2EDuration="9.681887355s" podCreationTimestamp="2024-11-12 20:56:04 +0000 UTC" firstStartedPulling="2024-11-12 20:56:05.125887614 +0000 UTC m=+15.223045586" lastFinishedPulling="2024-11-12 20:56:10.308935091 +0000 UTC m=+20.406092963" observedRunningTime="2024-11-12 20:56:11.127351453 +0000 UTC m=+21.224509425" watchObservedRunningTime="2024-11-12 20:56:13.681887355 +0000 UTC m=+23.779045227" Nov 12 20:56:13.685196 kubelet[3479]: I1112 20:56:13.682104 3479 topology_manager.go:215] "Topology Admit Handler" podUID="bd6851b7-c1f8-49e7-b5bf-1a6abe6967b1" podNamespace="calico-system" 
podName="calico-typha-54b4f86c8d-jxr2n" Nov 12 20:56:13.752951 kubelet[3479]: I1112 20:56:13.752921 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bd6851b7-c1f8-49e7-b5bf-1a6abe6967b1-typha-certs\") pod \"calico-typha-54b4f86c8d-jxr2n\" (UID: \"bd6851b7-c1f8-49e7-b5bf-1a6abe6967b1\") " pod="calico-system/calico-typha-54b4f86c8d-jxr2n" Nov 12 20:56:13.752951 kubelet[3479]: I1112 20:56:13.752967 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd6851b7-c1f8-49e7-b5bf-1a6abe6967b1-tigera-ca-bundle\") pod \"calico-typha-54b4f86c8d-jxr2n\" (UID: \"bd6851b7-c1f8-49e7-b5bf-1a6abe6967b1\") " pod="calico-system/calico-typha-54b4f86c8d-jxr2n" Nov 12 20:56:13.753142 kubelet[3479]: I1112 20:56:13.752998 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nxpc\" (UniqueName: \"kubernetes.io/projected/bd6851b7-c1f8-49e7-b5bf-1a6abe6967b1-kube-api-access-7nxpc\") pod \"calico-typha-54b4f86c8d-jxr2n\" (UID: \"bd6851b7-c1f8-49e7-b5bf-1a6abe6967b1\") " pod="calico-system/calico-typha-54b4f86c8d-jxr2n" Nov 12 20:56:13.790805 kubelet[3479]: I1112 20:56:13.790564 3479 topology_manager.go:215] "Topology Admit Handler" podUID="a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14" podNamespace="calico-system" podName="calico-node-bv4lt" Nov 12 20:56:13.853291 kubelet[3479]: I1112 20:56:13.853146 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6drc\" (UniqueName: \"kubernetes.io/projected/a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14-kube-api-access-s6drc\") pod \"calico-node-bv4lt\" (UID: \"a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14\") " pod="calico-system/calico-node-bv4lt" Nov 12 20:56:13.853291 kubelet[3479]: I1112 20:56:13.853267 3479 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14-lib-modules\") pod \"calico-node-bv4lt\" (UID: \"a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14\") " pod="calico-system/calico-node-bv4lt" Nov 12 20:56:13.853989 kubelet[3479]: I1112 20:56:13.853302 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14-xtables-lock\") pod \"calico-node-bv4lt\" (UID: \"a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14\") " pod="calico-system/calico-node-bv4lt" Nov 12 20:56:13.853989 kubelet[3479]: I1112 20:56:13.853358 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14-cni-log-dir\") pod \"calico-node-bv4lt\" (UID: \"a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14\") " pod="calico-system/calico-node-bv4lt" Nov 12 20:56:13.853989 kubelet[3479]: I1112 20:56:13.853385 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14-policysync\") pod \"calico-node-bv4lt\" (UID: \"a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14\") " pod="calico-system/calico-node-bv4lt" Nov 12 20:56:13.853989 kubelet[3479]: I1112 20:56:13.853410 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14-cni-bin-dir\") pod \"calico-node-bv4lt\" (UID: \"a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14\") " pod="calico-system/calico-node-bv4lt" Nov 12 20:56:13.853989 kubelet[3479]: I1112 20:56:13.853434 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" 
(UniqueName: \"kubernetes.io/host-path/a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14-cni-net-dir\") pod \"calico-node-bv4lt\" (UID: \"a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14\") " pod="calico-system/calico-node-bv4lt" Nov 12 20:56:13.854227 kubelet[3479]: I1112 20:56:13.853464 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14-tigera-ca-bundle\") pod \"calico-node-bv4lt\" (UID: \"a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14\") " pod="calico-system/calico-node-bv4lt" Nov 12 20:56:13.854227 kubelet[3479]: I1112 20:56:13.853704 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14-var-run-calico\") pod \"calico-node-bv4lt\" (UID: \"a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14\") " pod="calico-system/calico-node-bv4lt" Nov 12 20:56:13.854227 kubelet[3479]: I1112 20:56:13.853929 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14-var-lib-calico\") pod \"calico-node-bv4lt\" (UID: \"a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14\") " pod="calico-system/calico-node-bv4lt" Nov 12 20:56:13.854227 kubelet[3479]: I1112 20:56:13.854004 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14-node-certs\") pod \"calico-node-bv4lt\" (UID: \"a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14\") " pod="calico-system/calico-node-bv4lt" Nov 12 20:56:13.854227 kubelet[3479]: I1112 20:56:13.854053 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14-flexvol-driver-host\") pod \"calico-node-bv4lt\" (UID: \"a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14\") " pod="calico-system/calico-node-bv4lt" Nov 12 20:56:13.926417 kubelet[3479]: I1112 20:56:13.923855 3479 topology_manager.go:215] "Topology Admit Handler" podUID="6ef4534f-dec6-4d07-bd02-f445b758fa12" podNamespace="calico-system" podName="csi-node-driver-6rmfb" Nov 12 20:56:13.926417 kubelet[3479]: E1112 20:56:13.924186 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rmfb" podUID="6ef4534f-dec6-4d07-bd02-f445b758fa12" Nov 12 20:56:13.954762 kubelet[3479]: I1112 20:56:13.954651 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6ef4534f-dec6-4d07-bd02-f445b758fa12-socket-dir\") pod \"csi-node-driver-6rmfb\" (UID: \"6ef4534f-dec6-4d07-bd02-f445b758fa12\") " pod="calico-system/csi-node-driver-6rmfb" Nov 12 20:56:13.954885 kubelet[3479]: I1112 20:56:13.954796 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6ef4534f-dec6-4d07-bd02-f445b758fa12-varrun\") pod \"csi-node-driver-6rmfb\" (UID: \"6ef4534f-dec6-4d07-bd02-f445b758fa12\") " pod="calico-system/csi-node-driver-6rmfb" Nov 12 20:56:13.954885 kubelet[3479]: I1112 20:56:13.954824 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ef4534f-dec6-4d07-bd02-f445b758fa12-kubelet-dir\") pod \"csi-node-driver-6rmfb\" (UID: \"6ef4534f-dec6-4d07-bd02-f445b758fa12\") " pod="calico-system/csi-node-driver-6rmfb" Nov 12 20:56:13.954885 kubelet[3479]: 
I1112 20:56:13.954869 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6ef4534f-dec6-4d07-bd02-f445b758fa12-registration-dir\") pod \"csi-node-driver-6rmfb\" (UID: \"6ef4534f-dec6-4d07-bd02-f445b758fa12\") " pod="calico-system/csi-node-driver-6rmfb" Nov 12 20:56:13.955016 kubelet[3479]: I1112 20:56:13.954896 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rklx8\" (UniqueName: \"kubernetes.io/projected/6ef4534f-dec6-4d07-bd02-f445b758fa12-kube-api-access-rklx8\") pod \"csi-node-driver-6rmfb\" (UID: \"6ef4534f-dec6-4d07-bd02-f445b758fa12\") " pod="calico-system/csi-node-driver-6rmfb" Nov 12 20:56:13.958923 kubelet[3479]: E1112 20:56:13.958690 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:13.958923 kubelet[3479]: W1112 20:56:13.958710 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:13.958923 kubelet[3479]: E1112 20:56:13.958733 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:13.959362 kubelet[3479]: E1112 20:56:13.959347 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:13.959577 kubelet[3479]: W1112 20:56:13.959503 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:13.959577 kubelet[3479]: E1112 20:56:13.959532 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:13.960014 kubelet[3479]: E1112 20:56:13.960001 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:13.962079 kubelet[3479]: W1112 20:56:13.960225 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:13.962079 kubelet[3479]: E1112 20:56:13.960248 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:13.995186 containerd[1782]: time="2024-11-12T20:56:13.995133356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54b4f86c8d-jxr2n,Uid:bd6851b7-c1f8-49e7-b5bf-1a6abe6967b1,Namespace:calico-system,Attempt:0,}" Nov 12 20:56:14.057047 kubelet[3479]: E1112 20:56:14.056248 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:14.057047 kubelet[3479]: W1112 20:56:14.056289 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:14.057047 kubelet[3479]: E1112 20:56:14.056313 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:14.057047 kubelet[3479]: E1112 20:56:14.056830 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:14.057047 kubelet[3479]: W1112 20:56:14.056844 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:14.057047 kubelet[3479]: E1112 20:56:14.056872 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:14.063833 kubelet[3479]: E1112 20:56:14.063662 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:14.063833 kubelet[3479]: W1112 20:56:14.063673 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:14.064149 kubelet[3479]: E1112 20:56:14.063932 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:14.065010 kubelet[3479]: E1112 20:56:14.064889 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:14.065010 kubelet[3479]: W1112 20:56:14.064900 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:14.065010 kubelet[3479]: E1112 20:56:14.064980 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:14.065690 kubelet[3479]: E1112 20:56:14.065234 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:14.065690 kubelet[3479]: W1112 20:56:14.065263 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:14.065690 kubelet[3479]: E1112 20:56:14.065435 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:14.066455 kubelet[3479]: E1112 20:56:14.066380 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:14.066455 kubelet[3479]: W1112 20:56:14.066392 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:14.067021 kubelet[3479]: E1112 20:56:14.066748 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:14.067885 containerd[1782]: time="2024-11-12T20:56:14.067722328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:14.068152 containerd[1782]: time="2024-11-12T20:56:14.067867729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:14.068420 kubelet[3479]: E1112 20:56:14.068081 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:14.068420 kubelet[3479]: W1112 20:56:14.068114 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:14.068512 containerd[1782]: time="2024-11-12T20:56:14.068020231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:14.069016 kubelet[3479]: E1112 20:56:14.068717 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:14.069208 containerd[1782]: time="2024-11-12T20:56:14.068966139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:14.069745 kubelet[3479]: E1112 20:56:14.069682 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:14.069745 kubelet[3479]: W1112 20:56:14.069699 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:14.070279 kubelet[3479]: E1112 20:56:14.070130 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:14.070912 kubelet[3479]: E1112 20:56:14.070784 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:14.070912 kubelet[3479]: W1112 20:56:14.070798 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:14.071387 kubelet[3479]: E1112 20:56:14.071194 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:14.071871 kubelet[3479]: E1112 20:56:14.071736 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:14.071871 kubelet[3479]: W1112 20:56:14.071750 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:14.072278 kubelet[3479]: E1112 20:56:14.072013 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:14.073156 kubelet[3479]: E1112 20:56:14.073002 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:14.073156 kubelet[3479]: W1112 20:56:14.073018 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:14.073688 kubelet[3479]: E1112 20:56:14.073565 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:14.074394 kubelet[3479]: E1112 20:56:14.074073 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:14.074394 kubelet[3479]: W1112 20:56:14.074188 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:14.074394 kubelet[3479]: E1112 20:56:14.074361 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:14.075112 kubelet[3479]: E1112 20:56:14.074980 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:14.075112 kubelet[3479]: W1112 20:56:14.074994 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:14.075686 kubelet[3479]: E1112 20:56:14.075533 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:14.076115 kubelet[3479]: E1112 20:56:14.076102 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:14.076376 kubelet[3479]: W1112 20:56:14.076319 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:14.076791 kubelet[3479]: E1112 20:56:14.076645 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:14.077444 kubelet[3479]: E1112 20:56:14.077310 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:14.077444 kubelet[3479]: W1112 20:56:14.077324 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:14.077881 kubelet[3479]: E1112 20:56:14.077536 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:14.078585 kubelet[3479]: E1112 20:56:14.078371 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:14.078585 kubelet[3479]: W1112 20:56:14.078385 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:14.078585 kubelet[3479]: E1112 20:56:14.078425 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:14.079421 kubelet[3479]: E1112 20:56:14.079203 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:14.079421 kubelet[3479]: W1112 20:56:14.079220 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:14.079421 kubelet[3479]: E1112 20:56:14.079378 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:14.080668 kubelet[3479]: E1112 20:56:14.080324 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:14.080668 kubelet[3479]: W1112 20:56:14.080443 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:14.080668 kubelet[3479]: E1112 20:56:14.080465 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:14.082519 kubelet[3479]: E1112 20:56:14.082464 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:14.082519 kubelet[3479]: W1112 20:56:14.082479 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:14.082519 kubelet[3479]: E1112 20:56:14.082496 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:14.095897 kubelet[3479]: E1112 20:56:14.095881 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:14.096125 kubelet[3479]: W1112 20:56:14.095989 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:14.096125 kubelet[3479]: E1112 20:56:14.096012 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:14.099931 containerd[1782]: time="2024-11-12T20:56:14.099814625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bv4lt,Uid:a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14,Namespace:calico-system,Attempt:0,}" Nov 12 20:56:14.159895 containerd[1782]: time="2024-11-12T20:56:14.158717971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:14.159895 containerd[1782]: time="2024-11-12T20:56:14.158776671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:14.159895 containerd[1782]: time="2024-11-12T20:56:14.158799371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:14.159895 containerd[1782]: time="2024-11-12T20:56:14.158888072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:14.172515 containerd[1782]: time="2024-11-12T20:56:14.171947693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54b4f86c8d-jxr2n,Uid:bd6851b7-c1f8-49e7-b5bf-1a6abe6967b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"816e2b672577c79763d517432033c0a061b74445d06eadfc703e81b3062f4f2a\"" Nov 12 20:56:14.173873 containerd[1782]: time="2024-11-12T20:56:14.173834310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 20:56:14.209444 containerd[1782]: time="2024-11-12T20:56:14.209086537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bv4lt,Uid:a4a9ab9d-5f92-4bfc-bac6-8a38023ecd14,Namespace:calico-system,Attempt:0,} returns sandbox id \"ceb33e5081b422c93f5b969ba924293db5ba3e8912129e19cc9a88164a926237\"" Nov 12 20:56:16.023185 kubelet[3479]: E1112 20:56:16.022107 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rmfb" podUID="6ef4534f-dec6-4d07-bd02-f445b758fa12" Nov 12 20:56:16.119115 containerd[1782]: time="2024-11-12T20:56:16.119025122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:16.121767 containerd[1782]: time="2024-11-12T20:56:16.121642346Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168" Nov 12 20:56:16.126773 containerd[1782]: time="2024-11-12T20:56:16.126632092Z" level=info msg="ImageCreate event name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:16.131133 containerd[1782]: time="2024-11-12T20:56:16.130990233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:16.133079 containerd[1782]: time="2024-11-12T20:56:16.131896041Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 1.95801893s" Nov 12 20:56:16.133079 containerd[1782]: time="2024-11-12T20:56:16.131947741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\"" Nov 12 20:56:16.133432 containerd[1782]: time="2024-11-12T20:56:16.133402055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 12 20:56:16.149819 containerd[1782]: time="2024-11-12T20:56:16.149528204Z" level=info msg="CreateContainer within sandbox \"816e2b672577c79763d517432033c0a061b74445d06eadfc703e81b3062f4f2a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 20:56:16.215613 containerd[1782]: time="2024-11-12T20:56:16.215519015Z" level=info msg="CreateContainer within sandbox \"816e2b672577c79763d517432033c0a061b74445d06eadfc703e81b3062f4f2a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"5e1c65cb5fc678e1f41c8e9a695a6b4e824abc14e4d38227a22079c6c67139dd\"" Nov 12 20:56:16.216482 containerd[1782]: time="2024-11-12T20:56:16.216452124Z" level=info msg="StartContainer for \"5e1c65cb5fc678e1f41c8e9a695a6b4e824abc14e4d38227a22079c6c67139dd\"" Nov 12 20:56:16.286321 containerd[1782]: time="2024-11-12T20:56:16.286213370Z" level=info msg="StartContainer for \"5e1c65cb5fc678e1f41c8e9a695a6b4e824abc14e4d38227a22079c6c67139dd\" returns successfully" Nov 12 20:56:17.138284 kubelet[3479]: I1112 20:56:17.138250 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-54b4f86c8d-jxr2n" podStartSLOduration=2.179457722 podStartE2EDuration="4.138193959s" podCreationTimestamp="2024-11-12 20:56:13 +0000 UTC" firstStartedPulling="2024-11-12 20:56:14.173546808 +0000 UTC m=+24.270704680" lastFinishedPulling="2024-11-12 20:56:16.132282945 +0000 UTC m=+26.229440917" observedRunningTime="2024-11-12 20:56:17.136680545 +0000 UTC m=+27.233838417" watchObservedRunningTime="2024-11-12 20:56:17.138193959 +0000 UTC m=+27.235351931" Nov 12 20:56:17.166501 kubelet[3479]: E1112 20:56:17.166470 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.166501 kubelet[3479]: W1112 20:56:17.166493 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.166823 kubelet[3479]: E1112 20:56:17.166519 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:17.166823 kubelet[3479]: E1112 20:56:17.166770 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.166823 kubelet[3479]: W1112 20:56:17.166785 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.166823 kubelet[3479]: E1112 20:56:17.166809 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:17.167237 kubelet[3479]: E1112 20:56:17.167027 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.167237 kubelet[3479]: W1112 20:56:17.167039 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.167237 kubelet[3479]: E1112 20:56:17.167057 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:17.167609 kubelet[3479]: E1112 20:56:17.167341 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.167609 kubelet[3479]: W1112 20:56:17.167354 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.167609 kubelet[3479]: E1112 20:56:17.167376 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:17.167815 kubelet[3479]: E1112 20:56:17.167638 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.167815 kubelet[3479]: W1112 20:56:17.167651 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.167815 kubelet[3479]: E1112 20:56:17.167669 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:17.168277 kubelet[3479]: E1112 20:56:17.167880 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.168277 kubelet[3479]: W1112 20:56:17.167890 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.168277 kubelet[3479]: E1112 20:56:17.167908 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:17.168277 kubelet[3479]: E1112 20:56:17.168109 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.168277 kubelet[3479]: W1112 20:56:17.168123 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.168277 kubelet[3479]: E1112 20:56:17.168142 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:17.168772 kubelet[3479]: E1112 20:56:17.168376 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.168772 kubelet[3479]: W1112 20:56:17.168387 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.168772 kubelet[3479]: E1112 20:56:17.168406 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:17.168772 kubelet[3479]: E1112 20:56:17.168628 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.168772 kubelet[3479]: W1112 20:56:17.168641 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.168772 kubelet[3479]: E1112 20:56:17.168658 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:17.169087 kubelet[3479]: E1112 20:56:17.168863 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.169087 kubelet[3479]: W1112 20:56:17.168874 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.169087 kubelet[3479]: E1112 20:56:17.168891 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:17.169087 kubelet[3479]: E1112 20:56:17.169081 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.169321 kubelet[3479]: W1112 20:56:17.169092 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.169321 kubelet[3479]: E1112 20:56:17.169108 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:17.169321 kubelet[3479]: E1112 20:56:17.169320 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.169492 kubelet[3479]: W1112 20:56:17.169331 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.169492 kubelet[3479]: E1112 20:56:17.169348 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:17.169595 kubelet[3479]: E1112 20:56:17.169559 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.169595 kubelet[3479]: W1112 20:56:17.169570 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.169595 kubelet[3479]: E1112 20:56:17.169586 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:17.169808 kubelet[3479]: E1112 20:56:17.169785 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.169808 kubelet[3479]: W1112 20:56:17.169801 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.169934 kubelet[3479]: E1112 20:56:17.169817 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:17.170054 kubelet[3479]: E1112 20:56:17.170037 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.170054 kubelet[3479]: W1112 20:56:17.170051 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.170156 kubelet[3479]: E1112 20:56:17.170072 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:17.183291 kubelet[3479]: E1112 20:56:17.183273 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.183291 kubelet[3479]: W1112 20:56:17.183286 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.183478 kubelet[3479]: E1112 20:56:17.183303 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:17.183563 kubelet[3479]: E1112 20:56:17.183522 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.183563 kubelet[3479]: W1112 20:56:17.183533 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.183563 kubelet[3479]: E1112 20:56:17.183554 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:17.183757 kubelet[3479]: E1112 20:56:17.183748 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.183819 kubelet[3479]: W1112 20:56:17.183758 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.183819 kubelet[3479]: E1112 20:56:17.183779 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:17.184081 kubelet[3479]: E1112 20:56:17.184064 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.184081 kubelet[3479]: W1112 20:56:17.184079 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.184444 kubelet[3479]: E1112 20:56:17.184100 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:17.184444 kubelet[3479]: E1112 20:56:17.184403 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.184444 kubelet[3479]: W1112 20:56:17.184415 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.184444 kubelet[3479]: E1112 20:56:17.184443 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:17.184703 kubelet[3479]: E1112 20:56:17.184628 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.184703 kubelet[3479]: W1112 20:56:17.184638 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.184703 kubelet[3479]: E1112 20:56:17.184665 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:17.184968 kubelet[3479]: E1112 20:56:17.184946 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.184968 kubelet[3479]: W1112 20:56:17.184961 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.185105 kubelet[3479]: E1112 20:56:17.185064 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:17.185333 kubelet[3479]: E1112 20:56:17.185282 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.185333 kubelet[3479]: W1112 20:56:17.185293 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.185431 kubelet[3479]: E1112 20:56:17.185381 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:17.185845 kubelet[3479]: E1112 20:56:17.185594 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.185845 kubelet[3479]: W1112 20:56:17.185607 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.185845 kubelet[3479]: E1112 20:56:17.185627 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:17.186119 kubelet[3479]: E1112 20:56:17.186105 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.186119 kubelet[3479]: W1112 20:56:17.186119 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.186244 kubelet[3479]: E1112 20:56:17.186149 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:17.186671 kubelet[3479]: E1112 20:56:17.186505 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.186671 kubelet[3479]: W1112 20:56:17.186518 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.186671 kubelet[3479]: E1112 20:56:17.186538 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:17.186918 kubelet[3479]: E1112 20:56:17.186859 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.186918 kubelet[3479]: W1112 20:56:17.186873 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.187056 kubelet[3479]: E1112 20:56:17.186943 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:17.187410 kubelet[3479]: E1112 20:56:17.187388 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.187410 kubelet[3479]: W1112 20:56:17.187404 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.187632 kubelet[3479]: E1112 20:56:17.187489 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:17.187690 kubelet[3479]: E1112 20:56:17.187642 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.187690 kubelet[3479]: W1112 20:56:17.187679 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.188009 kubelet[3479]: E1112 20:56:17.187777 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:17.188009 kubelet[3479]: E1112 20:56:17.187979 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.188009 kubelet[3479]: W1112 20:56:17.187991 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.190252 kubelet[3479]: E1112 20:56:17.188019 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:17.190252 kubelet[3479]: E1112 20:56:17.188311 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.190252 kubelet[3479]: W1112 20:56:17.188323 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.190252 kubelet[3479]: E1112 20:56:17.188339 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:17.190252 kubelet[3479]: E1112 20:56:17.188538 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.190252 kubelet[3479]: W1112 20:56:17.188549 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.190252 kubelet[3479]: E1112 20:56:17.188566 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:17.192489 kubelet[3479]: E1112 20:56:17.192461 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:17.192489 kubelet[3479]: W1112 20:56:17.192482 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:17.192680 kubelet[3479]: E1112 20:56:17.192500 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:17.348963 containerd[1782]: time="2024-11-12T20:56:17.348917610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:17.351028 containerd[1782]: time="2024-11-12T20:56:17.350969629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116" Nov 12 20:56:17.355184 containerd[1782]: time="2024-11-12T20:56:17.355114967Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:17.359254 containerd[1782]: time="2024-11-12T20:56:17.359198105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:17.359989 containerd[1782]: time="2024-11-12T20:56:17.359876311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 1.226430356s" Nov 12 20:56:17.359989 containerd[1782]: time="2024-11-12T20:56:17.359913912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\"" Nov 12 20:56:17.362046 containerd[1782]: time="2024-11-12T20:56:17.361716028Z" level=info msg="CreateContainer within sandbox \"ceb33e5081b422c93f5b969ba924293db5ba3e8912129e19cc9a88164a926237\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 20:56:17.393430 containerd[1782]: time="2024-11-12T20:56:17.393347421Z" level=info msg="CreateContainer within sandbox \"ceb33e5081b422c93f5b969ba924293db5ba3e8912129e19cc9a88164a926237\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6ad10c7791cbfbec7643253901720b25b6c95612d07eb87556ece49a1df3f5da\"" Nov 12 20:56:17.395026 containerd[1782]: time="2024-11-12T20:56:17.393786125Z" level=info msg="StartContainer for \"6ad10c7791cbfbec7643253901720b25b6c95612d07eb87556ece49a1df3f5da\"" Nov 12 20:56:17.455786 containerd[1782]: time="2024-11-12T20:56:17.455750599Z" level=info msg="StartContainer for \"6ad10c7791cbfbec7643253901720b25b6c95612d07eb87556ece49a1df3f5da\" returns successfully" Nov 12 20:56:17.486598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ad10c7791cbfbec7643253901720b25b6c95612d07eb87556ece49a1df3f5da-rootfs.mount: Deactivated successfully. Nov 12 20:56:18.022438 kubelet[3479]: E1112 20:56:18.022013 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rmfb" podUID="6ef4534f-dec6-4d07-bd02-f445b758fa12" Nov 12 20:56:18.851311 containerd[1782]: time="2024-11-12T20:56:18.851229620Z" level=info msg="shim disconnected" id=6ad10c7791cbfbec7643253901720b25b6c95612d07eb87556ece49a1df3f5da namespace=k8s.io Nov 12 20:56:18.851311 containerd[1782]: time="2024-11-12T20:56:18.851305721Z" level=warning msg="cleaning up after shim disconnected" id=6ad10c7791cbfbec7643253901720b25b6c95612d07eb87556ece49a1df3f5da namespace=k8s.io Nov 12 20:56:18.851311 containerd[1782]: time="2024-11-12T20:56:18.851317321Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:56:19.133476 containerd[1782]: time="2024-11-12T20:56:19.132949029Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 12 20:56:20.024996 kubelet[3479]: E1112 20:56:20.023248 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rmfb" podUID="6ef4534f-dec6-4d07-bd02-f445b758fa12" Nov 12 20:56:22.023229 kubelet[3479]: E1112 20:56:22.023195 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rmfb" podUID="6ef4534f-dec6-4d07-bd02-f445b758fa12" Nov 12 20:56:23.068956 containerd[1782]: time="2024-11-12T20:56:23.068905001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:23.071959 containerd[1782]: time="2024-11-12T20:56:23.071898627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683" Nov 12 20:56:23.075424 containerd[1782]: time="2024-11-12T20:56:23.075373558Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:23.080769 containerd[1782]: time="2024-11-12T20:56:23.080714404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:23.081554 containerd[1782]: time="2024-11-12T20:56:23.081428510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", 
repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 3.948422881s" Nov 12 20:56:23.081554 containerd[1782]: time="2024-11-12T20:56:23.081465211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\"" Nov 12 20:56:23.083788 containerd[1782]: time="2024-11-12T20:56:23.083761231Z" level=info msg="CreateContainer within sandbox \"ceb33e5081b422c93f5b969ba924293db5ba3e8912129e19cc9a88164a926237\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 20:56:23.123926 containerd[1782]: time="2024-11-12T20:56:23.123893280Z" level=info msg="CreateContainer within sandbox \"ceb33e5081b422c93f5b969ba924293db5ba3e8912129e19cc9a88164a926237\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"aea5e0b95c62df3f6912ee5f58e83475b4909465617d79692c7d33256b9c8330\"" Nov 12 20:56:23.125264 containerd[1782]: time="2024-11-12T20:56:23.124298984Z" level=info msg="StartContainer for \"aea5e0b95c62df3f6912ee5f58e83475b4909465617d79692c7d33256b9c8330\"" Nov 12 20:56:23.193964 containerd[1782]: time="2024-11-12T20:56:23.193914690Z" level=info msg="StartContainer for \"aea5e0b95c62df3f6912ee5f58e83475b4909465617d79692c7d33256b9c8330\" returns successfully" Nov 12 20:56:24.023309 kubelet[3479]: E1112 20:56:24.022003 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rmfb" podUID="6ef4534f-dec6-4d07-bd02-f445b758fa12" Nov 12 20:56:24.592091 containerd[1782]: time="2024-11-12T20:56:24.592044064Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE 
\"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 20:56:24.617622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aea5e0b95c62df3f6912ee5f58e83475b4909465617d79692c7d33256b9c8330-rootfs.mount: Deactivated successfully. Nov 12 20:56:24.667292 kubelet[3479]: I1112 20:56:24.665785 3479 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 20:56:24.697001 kubelet[3479]: I1112 20:56:24.696970 3479 topology_manager.go:215] "Topology Admit Handler" podUID="e4125a79-bb3b-439b-8dfa-c76cc22a17a7" podNamespace="kube-system" podName="coredns-76f75df574-p948p" Nov 12 20:56:24.712617 kubelet[3479]: I1112 20:56:24.707263 3479 topology_manager.go:215] "Topology Admit Handler" podUID="4fb55e5a-3cc6-4c6a-abc1-01ce9cff27a5" podNamespace="kube-system" podName="coredns-76f75df574-9vlxm" Nov 12 20:56:24.712617 kubelet[3479]: I1112 20:56:24.707619 3479 topology_manager.go:215] "Topology Admit Handler" podUID="618b1aa4-6bea-46a5-a0d9-a90b9001122c" podNamespace="calico-system" podName="calico-kube-controllers-7cc8897bfb-42hq9" Nov 12 20:56:24.712617 kubelet[3479]: I1112 20:56:24.708497 3479 topology_manager.go:215] "Topology Admit Handler" podUID="54c74a88-f218-4ecd-bff2-da8a0009d8be" podNamespace="calico-apiserver" podName="calico-apiserver-7774cd9f88-p4f2v" Nov 12 20:56:24.715131 kubelet[3479]: I1112 20:56:24.713684 3479 topology_manager.go:215] "Topology Admit Handler" podUID="c6403310-33ca-4d11-933f-4e5fed185033" podNamespace="calico-apiserver" podName="calico-apiserver-7774cd9f88-w6ktj" Nov 12 20:56:24.733877 kubelet[3479]: I1112 20:56:24.733834 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64rj7\" (UniqueName: \"kubernetes.io/projected/618b1aa4-6bea-46a5-a0d9-a90b9001122c-kube-api-access-64rj7\") pod 
\"calico-kube-controllers-7cc8897bfb-42hq9\" (UID: \"618b1aa4-6bea-46a5-a0d9-a90b9001122c\") " pod="calico-system/calico-kube-controllers-7cc8897bfb-42hq9" Nov 12 20:56:24.733877 kubelet[3479]: I1112 20:56:24.733877 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c6403310-33ca-4d11-933f-4e5fed185033-calico-apiserver-certs\") pod \"calico-apiserver-7774cd9f88-w6ktj\" (UID: \"c6403310-33ca-4d11-933f-4e5fed185033\") " pod="calico-apiserver/calico-apiserver-7774cd9f88-w6ktj" Nov 12 20:56:24.734212 kubelet[3479]: I1112 20:56:24.733907 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqxhn\" (UniqueName: \"kubernetes.io/projected/54c74a88-f218-4ecd-bff2-da8a0009d8be-kube-api-access-fqxhn\") pod \"calico-apiserver-7774cd9f88-p4f2v\" (UID: \"54c74a88-f218-4ecd-bff2-da8a0009d8be\") " pod="calico-apiserver/calico-apiserver-7774cd9f88-p4f2v" Nov 12 20:56:24.734212 kubelet[3479]: I1112 20:56:24.733940 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/54c74a88-f218-4ecd-bff2-da8a0009d8be-calico-apiserver-certs\") pod \"calico-apiserver-7774cd9f88-p4f2v\" (UID: \"54c74a88-f218-4ecd-bff2-da8a0009d8be\") " pod="calico-apiserver/calico-apiserver-7774cd9f88-p4f2v" Nov 12 20:56:24.734212 kubelet[3479]: I1112 20:56:24.733971 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/618b1aa4-6bea-46a5-a0d9-a90b9001122c-tigera-ca-bundle\") pod \"calico-kube-controllers-7cc8897bfb-42hq9\" (UID: \"618b1aa4-6bea-46a5-a0d9-a90b9001122c\") " pod="calico-system/calico-kube-controllers-7cc8897bfb-42hq9" Nov 12 20:56:24.734212 kubelet[3479]: I1112 20:56:24.734063 3479 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fb55e5a-3cc6-4c6a-abc1-01ce9cff27a5-config-volume\") pod \"coredns-76f75df574-9vlxm\" (UID: \"4fb55e5a-3cc6-4c6a-abc1-01ce9cff27a5\") " pod="kube-system/coredns-76f75df574-9vlxm" Nov 12 20:56:24.734212 kubelet[3479]: I1112 20:56:24.734140 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4125a79-bb3b-439b-8dfa-c76cc22a17a7-config-volume\") pod \"coredns-76f75df574-p948p\" (UID: \"e4125a79-bb3b-439b-8dfa-c76cc22a17a7\") " pod="kube-system/coredns-76f75df574-p948p" Nov 12 20:56:24.734627 kubelet[3479]: I1112 20:56:24.734505 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c47th\" (UniqueName: \"kubernetes.io/projected/c6403310-33ca-4d11-933f-4e5fed185033-kube-api-access-c47th\") pod \"calico-apiserver-7774cd9f88-w6ktj\" (UID: \"c6403310-33ca-4d11-933f-4e5fed185033\") " pod="calico-apiserver/calico-apiserver-7774cd9f88-w6ktj" Nov 12 20:56:24.734627 kubelet[3479]: I1112 20:56:24.734561 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dxnf\" (UniqueName: \"kubernetes.io/projected/e4125a79-bb3b-439b-8dfa-c76cc22a17a7-kube-api-access-8dxnf\") pod \"coredns-76f75df574-p948p\" (UID: \"e4125a79-bb3b-439b-8dfa-c76cc22a17a7\") " pod="kube-system/coredns-76f75df574-p948p" Nov 12 20:56:24.734627 kubelet[3479]: I1112 20:56:24.734607 3479 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p8bh\" (UniqueName: \"kubernetes.io/projected/4fb55e5a-3cc6-4c6a-abc1-01ce9cff27a5-kube-api-access-2p8bh\") pod \"coredns-76f75df574-9vlxm\" (UID: \"4fb55e5a-3cc6-4c6a-abc1-01ce9cff27a5\") " pod="kube-system/coredns-76f75df574-9vlxm" Nov 12 20:56:25.002736 
containerd[1782]: time="2024-11-12T20:56:25.002183436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p948p,Uid:e4125a79-bb3b-439b-8dfa-c76cc22a17a7,Namespace:kube-system,Attempt:0,}" Nov 12 20:56:25.015267 containerd[1782]: time="2024-11-12T20:56:25.015233349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7774cd9f88-p4f2v,Uid:54c74a88-f218-4ecd-bff2-da8a0009d8be,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:56:25.015500 containerd[1782]: time="2024-11-12T20:56:25.015473951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9vlxm,Uid:4fb55e5a-3cc6-4c6a-abc1-01ce9cff27a5,Namespace:kube-system,Attempt:0,}" Nov 12 20:56:25.022418 containerd[1782]: time="2024-11-12T20:56:25.022378712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cc8897bfb-42hq9,Uid:618b1aa4-6bea-46a5-a0d9-a90b9001122c,Namespace:calico-system,Attempt:0,}" Nov 12 20:56:25.026426 containerd[1782]: time="2024-11-12T20:56:25.026390347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7774cd9f88-w6ktj,Uid:c6403310-33ca-4d11-933f-4e5fed185033,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:56:26.244500 containerd[1782]: time="2024-11-12T20:56:26.243753947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6rmfb,Uid:6ef4534f-dec6-4d07-bd02-f445b758fa12,Namespace:calico-system,Attempt:0,}" Nov 12 20:56:26.260758 containerd[1782]: time="2024-11-12T20:56:26.260698794Z" level=info msg="shim disconnected" id=aea5e0b95c62df3f6912ee5f58e83475b4909465617d79692c7d33256b9c8330 namespace=k8s.io Nov 12 20:56:26.260894 containerd[1782]: time="2024-11-12T20:56:26.260760595Z" level=warning msg="cleaning up after shim disconnected" id=aea5e0b95c62df3f6912ee5f58e83475b4909465617d79692c7d33256b9c8330 namespace=k8s.io Nov 12 20:56:26.260894 containerd[1782]: time="2024-11-12T20:56:26.260772095Z" level=info msg="cleaning up dead shim" 
namespace=k8s.io Nov 12 20:56:26.585359 containerd[1782]: time="2024-11-12T20:56:26.585276021Z" level=error msg="Failed to destroy network for sandbox \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.585682 containerd[1782]: time="2024-11-12T20:56:26.585640924Z" level=error msg="encountered an error cleaning up failed sandbox \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.585780 containerd[1782]: time="2024-11-12T20:56:26.585710725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7774cd9f88-p4f2v,Uid:54c74a88-f218-4ecd-bff2-da8a0009d8be,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.586295 kubelet[3479]: E1112 20:56:26.586263 3479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.588592 kubelet[3479]: E1112 20:56:26.586486 3479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7774cd9f88-p4f2v" Nov 12 20:56:26.588592 kubelet[3479]: E1112 20:56:26.586539 3479 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7774cd9f88-p4f2v" Nov 12 20:56:26.588592 kubelet[3479]: E1112 20:56:26.588060 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7774cd9f88-p4f2v_calico-apiserver(54c74a88-f218-4ecd-bff2-da8a0009d8be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7774cd9f88-p4f2v_calico-apiserver(54c74a88-f218-4ecd-bff2-da8a0009d8be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7774cd9f88-p4f2v" podUID="54c74a88-f218-4ecd-bff2-da8a0009d8be" Nov 12 20:56:26.623421 containerd[1782]: time="2024-11-12T20:56:26.623371752Z" level=error msg="Failed to destroy network for sandbox \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.623769 containerd[1782]: time="2024-11-12T20:56:26.623729356Z" level=error msg="encountered an error cleaning up failed sandbox \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.623851 containerd[1782]: time="2024-11-12T20:56:26.623797556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p948p,Uid:e4125a79-bb3b-439b-8dfa-c76cc22a17a7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.624049 kubelet[3479]: E1112 20:56:26.624029 3479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.624571 kubelet[3479]: E1112 20:56:26.624216 3479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-p948p" Nov 12 20:56:26.624571 kubelet[3479]: E1112 20:56:26.624252 3479 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-p948p" Nov 12 20:56:26.624571 kubelet[3479]: E1112 20:56:26.624319 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-p948p_kube-system(e4125a79-bb3b-439b-8dfa-c76cc22a17a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-p948p_kube-system(e4125a79-bb3b-439b-8dfa-c76cc22a17a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-p948p" podUID="e4125a79-bb3b-439b-8dfa-c76cc22a17a7" Nov 12 20:56:26.629098 containerd[1782]: time="2024-11-12T20:56:26.628957201Z" level=error msg="Failed to destroy network for sandbox \"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.632146 containerd[1782]: time="2024-11-12T20:56:26.629436405Z" level=error msg="encountered an error cleaning up failed sandbox \"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.632146 containerd[1782]: time="2024-11-12T20:56:26.629492806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9vlxm,Uid:4fb55e5a-3cc6-4c6a-abc1-01ce9cff27a5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.632420 kubelet[3479]: E1112 20:56:26.630448 3479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.632420 kubelet[3479]: E1112 20:56:26.630515 3479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-9vlxm" Nov 12 20:56:26.632420 kubelet[3479]: E1112 20:56:26.630550 3479 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-9vlxm" Nov 12 20:56:26.632574 kubelet[3479]: E1112 20:56:26.630606 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-9vlxm_kube-system(4fb55e5a-3cc6-4c6a-abc1-01ce9cff27a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-9vlxm_kube-system(4fb55e5a-3cc6-4c6a-abc1-01ce9cff27a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-9vlxm" podUID="4fb55e5a-3cc6-4c6a-abc1-01ce9cff27a5" Nov 12 20:56:26.634432 containerd[1782]: time="2024-11-12T20:56:26.633591941Z" level=error msg="Failed to destroy network for sandbox \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.635153 containerd[1782]: time="2024-11-12T20:56:26.635119755Z" level=error msg="encountered an error cleaning up failed sandbox \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.635407 containerd[1782]: time="2024-11-12T20:56:26.635374657Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7774cd9f88-w6ktj,Uid:c6403310-33ca-4d11-933f-4e5fed185033,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.635959 kubelet[3479]: E1112 20:56:26.635811 3479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.635959 kubelet[3479]: E1112 20:56:26.635854 3479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7774cd9f88-w6ktj" Nov 12 20:56:26.635959 kubelet[3479]: E1112 20:56:26.635879 3479 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7774cd9f88-w6ktj" Nov 12 20:56:26.636126 kubelet[3479]: E1112 20:56:26.635933 3479 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7774cd9f88-w6ktj_calico-apiserver(c6403310-33ca-4d11-933f-4e5fed185033)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7774cd9f88-w6ktj_calico-apiserver(c6403310-33ca-4d11-933f-4e5fed185033)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7774cd9f88-w6ktj" podUID="c6403310-33ca-4d11-933f-4e5fed185033" Nov 12 20:56:26.640616 containerd[1782]: time="2024-11-12T20:56:26.640203999Z" level=error msg="Failed to destroy network for sandbox \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.640616 containerd[1782]: time="2024-11-12T20:56:26.640500302Z" level=error msg="encountered an error cleaning up failed sandbox \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.640616 containerd[1782]: time="2024-11-12T20:56:26.640543902Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cc8897bfb-42hq9,Uid:618b1aa4-6bea-46a5-a0d9-a90b9001122c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.641135 kubelet[3479]: E1112 20:56:26.640915 3479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.641135 kubelet[3479]: E1112 20:56:26.640995 3479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cc8897bfb-42hq9" Nov 12 20:56:26.641135 kubelet[3479]: E1112 20:56:26.641024 3479 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cc8897bfb-42hq9" Nov 12 20:56:26.641328 kubelet[3479]: E1112 20:56:26.641098 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cc8897bfb-42hq9_calico-system(618b1aa4-6bea-46a5-a0d9-a90b9001122c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-7cc8897bfb-42hq9_calico-system(618b1aa4-6bea-46a5-a0d9-a90b9001122c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cc8897bfb-42hq9" podUID="618b1aa4-6bea-46a5-a0d9-a90b9001122c" Nov 12 20:56:26.645035 containerd[1782]: time="2024-11-12T20:56:26.645005341Z" level=error msg="Failed to destroy network for sandbox \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.645583 containerd[1782]: time="2024-11-12T20:56:26.645445645Z" level=error msg="encountered an error cleaning up failed sandbox \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.645583 containerd[1782]: time="2024-11-12T20:56:26.645522845Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6rmfb,Uid:6ef4534f-dec6-4d07-bd02-f445b758fa12,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.645792 kubelet[3479]: E1112 20:56:26.645766 3479 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:26.645879 kubelet[3479]: E1112 20:56:26.645801 3479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6rmfb" Nov 12 20:56:26.645879 kubelet[3479]: E1112 20:56:26.645830 3479 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6rmfb" Nov 12 20:56:26.645879 kubelet[3479]: E1112 20:56:26.645878 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6rmfb_calico-system(6ef4534f-dec6-4d07-bd02-f445b758fa12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6rmfb_calico-system(6ef4534f-dec6-4d07-bd02-f445b758fa12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6rmfb" podUID="6ef4534f-dec6-4d07-bd02-f445b758fa12" Nov 12 20:56:27.154353 kubelet[3479]: I1112 20:56:27.154318 3479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Nov 12 20:56:27.155763 containerd[1782]: time="2024-11-12T20:56:27.155070582Z" level=info msg="StopPodSandbox for \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\"" Nov 12 20:56:27.155763 containerd[1782]: time="2024-11-12T20:56:27.155303184Z" level=info msg="Ensure that sandbox 50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c in task-service has been cleanup successfully" Nov 12 20:56:27.157627 kubelet[3479]: I1112 20:56:27.157227 3479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Nov 12 20:56:27.158191 containerd[1782]: time="2024-11-12T20:56:27.157880307Z" level=info msg="StopPodSandbox for \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\"" Nov 12 20:56:27.158191 containerd[1782]: time="2024-11-12T20:56:27.158072108Z" level=info msg="Ensure that sandbox a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d in task-service has been cleanup successfully" Nov 12 20:56:27.161440 kubelet[3479]: I1112 20:56:27.161419 3479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Nov 12 20:56:27.168207 containerd[1782]: time="2024-11-12T20:56:27.168145996Z" level=info msg="StopPodSandbox for \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\"" Nov 12 20:56:27.168638 containerd[1782]: time="2024-11-12T20:56:27.168518099Z" level=info msg="Ensure that sandbox 6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1 in task-service has 
been cleanup successfully" Nov 12 20:56:27.172536 kubelet[3479]: I1112 20:56:27.172514 3479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Nov 12 20:56:27.175180 containerd[1782]: time="2024-11-12T20:56:27.174181449Z" level=info msg="StopPodSandbox for \"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\"" Nov 12 20:56:27.175180 containerd[1782]: time="2024-11-12T20:56:27.174344750Z" level=info msg="Ensure that sandbox ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7 in task-service has been cleanup successfully" Nov 12 20:56:27.179197 kubelet[3479]: I1112 20:56:27.178936 3479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Nov 12 20:56:27.181403 containerd[1782]: time="2024-11-12T20:56:27.181377311Z" level=info msg="StopPodSandbox for \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\"" Nov 12 20:56:27.182934 kubelet[3479]: I1112 20:56:27.182915 3479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Nov 12 20:56:27.183940 containerd[1782]: time="2024-11-12T20:56:27.183911033Z" level=info msg="Ensure that sandbox e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f in task-service has been cleanup successfully" Nov 12 20:56:27.186266 containerd[1782]: time="2024-11-12T20:56:27.186009252Z" level=info msg="StopPodSandbox for \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\"" Nov 12 20:56:27.186266 containerd[1782]: time="2024-11-12T20:56:27.186222154Z" level=info msg="Ensure that sandbox 5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7 in task-service has been cleanup successfully" Nov 12 20:56:27.202781 containerd[1782]: time="2024-11-12T20:56:27.197823955Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 20:56:27.264363 containerd[1782]: time="2024-11-12T20:56:27.264316834Z" level=error msg="StopPodSandbox for \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\" failed" error="failed to destroy network for sandbox \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:27.265263 kubelet[3479]: E1112 20:56:27.265035 3479 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Nov 12 20:56:27.265263 kubelet[3479]: E1112 20:56:27.265123 3479 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1"} Nov 12 20:56:27.265263 kubelet[3479]: E1112 20:56:27.265197 3479 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6ef4534f-dec6-4d07-bd02-f445b758fa12\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:27.265574 kubelet[3479]: E1112 20:56:27.265543 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed 
to \"KillPodSandbox\" for \"6ef4534f-dec6-4d07-bd02-f445b758fa12\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6rmfb" podUID="6ef4534f-dec6-4d07-bd02-f445b758fa12" Nov 12 20:56:27.275911 containerd[1782]: time="2024-11-12T20:56:27.275870034Z" level=error msg="StopPodSandbox for \"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\" failed" error="failed to destroy network for sandbox \"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:27.276266 kubelet[3479]: E1112 20:56:27.276247 3479 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Nov 12 20:56:27.276403 kubelet[3479]: E1112 20:56:27.276391 3479 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7"} Nov 12 20:56:27.276524 kubelet[3479]: E1112 20:56:27.276514 3479 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4fb55e5a-3cc6-4c6a-abc1-01ce9cff27a5\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:27.276681 kubelet[3479]: E1112 20:56:27.276664 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4fb55e5a-3cc6-4c6a-abc1-01ce9cff27a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-9vlxm" podUID="4fb55e5a-3cc6-4c6a-abc1-01ce9cff27a5" Nov 12 20:56:27.284300 containerd[1782]: time="2024-11-12T20:56:27.284253307Z" level=error msg="StopPodSandbox for \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\" failed" error="failed to destroy network for sandbox \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:27.284466 kubelet[3479]: E1112 20:56:27.284447 3479 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Nov 12 20:56:27.284552 kubelet[3479]: E1112 20:56:27.284486 3479 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d"} Nov 12 20:56:27.284552 kubelet[3479]: E1112 20:56:27.284527 3479 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e4125a79-bb3b-439b-8dfa-c76cc22a17a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:27.284717 kubelet[3479]: E1112 20:56:27.284574 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e4125a79-bb3b-439b-8dfa-c76cc22a17a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-p948p" podUID="e4125a79-bb3b-439b-8dfa-c76cc22a17a7" Nov 12 20:56:27.287024 containerd[1782]: time="2024-11-12T20:56:27.286987331Z" level=error msg="StopPodSandbox for \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\" failed" error="failed to destroy network for sandbox \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Nov 12 20:56:27.288026 kubelet[3479]: E1112 20:56:27.287881 3479 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Nov 12 20:56:27.288026 kubelet[3479]: E1112 20:56:27.287923 3479 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c"} Nov 12 20:56:27.288026 kubelet[3479]: E1112 20:56:27.287969 3479 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c6403310-33ca-4d11-933f-4e5fed185033\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:27.288026 kubelet[3479]: E1112 20:56:27.288004 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c6403310-33ca-4d11-933f-4e5fed185033\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7774cd9f88-w6ktj" podUID="c6403310-33ca-4d11-933f-4e5fed185033" Nov 12 
20:56:27.296121 containerd[1782]: time="2024-11-12T20:56:27.296089910Z" level=error msg="StopPodSandbox for \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\" failed" error="failed to destroy network for sandbox \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:27.296373 kubelet[3479]: E1112 20:56:27.296250 3479 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Nov 12 20:56:27.296373 kubelet[3479]: E1112 20:56:27.296281 3479 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7"} Nov 12 20:56:27.296373 kubelet[3479]: E1112 20:56:27.296321 3479 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"618b1aa4-6bea-46a5-a0d9-a90b9001122c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:27.296373 kubelet[3479]: E1112 20:56:27.296358 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"618b1aa4-6bea-46a5-a0d9-a90b9001122c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cc8897bfb-42hq9" podUID="618b1aa4-6bea-46a5-a0d9-a90b9001122c" Nov 12 20:56:27.302175 containerd[1782]: time="2024-11-12T20:56:27.302128363Z" level=error msg="StopPodSandbox for \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\" failed" error="failed to destroy network for sandbox \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:27.302334 kubelet[3479]: E1112 20:56:27.302310 3479 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Nov 12 20:56:27.302410 kubelet[3479]: E1112 20:56:27.302338 3479 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f"} Nov 12 20:56:27.302410 kubelet[3479]: E1112 20:56:27.302376 3479 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"54c74a88-f218-4ecd-bff2-da8a0009d8be\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:27.302503 kubelet[3479]: E1112 20:56:27.302415 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"54c74a88-f218-4ecd-bff2-da8a0009d8be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7774cd9f88-p4f2v" podUID="54c74a88-f218-4ecd-bff2-da8a0009d8be" Nov 12 20:56:27.381084 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7-shm.mount: Deactivated successfully. Nov 12 20:56:27.381338 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d-shm.mount: Deactivated successfully. Nov 12 20:56:27.381512 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f-shm.mount: Deactivated successfully. Nov 12 20:56:32.883548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2793055463.mount: Deactivated successfully. 
Nov 12 20:56:32.934034 containerd[1782]: time="2024-11-12T20:56:32.933987831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:32.936691 containerd[1782]: time="2024-11-12T20:56:32.936649555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 12 20:56:32.940098 containerd[1782]: time="2024-11-12T20:56:32.940068987Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:32.944339 containerd[1782]: time="2024-11-12T20:56:32.944286025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:32.945025 containerd[1782]: time="2024-11-12T20:56:32.944869131Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 5.747006775s" Nov 12 20:56:32.945025 containerd[1782]: time="2024-11-12T20:56:32.944924831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\"" Nov 12 20:56:32.953770 containerd[1782]: time="2024-11-12T20:56:32.953623911Z" level=info msg="CreateContainer within sandbox \"ceb33e5081b422c93f5b969ba924293db5ba3e8912129e19cc9a88164a926237\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 20:56:33.004588 containerd[1782]: time="2024-11-12T20:56:33.004489476Z" level=info 
msg="CreateContainer within sandbox \"ceb33e5081b422c93f5b969ba924293db5ba3e8912129e19cc9a88164a926237\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8a5e2e415ccad4b02a3a9b6f09f9b4ca0dd6e0cbba94b415a04d9952053dc2bf\"" Nov 12 20:56:33.005441 containerd[1782]: time="2024-11-12T20:56:33.005345083Z" level=info msg="StartContainer for \"8a5e2e415ccad4b02a3a9b6f09f9b4ca0dd6e0cbba94b415a04d9952053dc2bf\"" Nov 12 20:56:33.071568 containerd[1782]: time="2024-11-12T20:56:33.071350887Z" level=info msg="StartContainer for \"8a5e2e415ccad4b02a3a9b6f09f9b4ca0dd6e0cbba94b415a04d9952053dc2bf\" returns successfully" Nov 12 20:56:33.229244 kubelet[3479]: I1112 20:56:33.229133 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-bv4lt" podStartSLOduration=1.493773539 podStartE2EDuration="20.229095029s" podCreationTimestamp="2024-11-12 20:56:13 +0000 UTC" firstStartedPulling="2024-11-12 20:56:14.210093246 +0000 UTC m=+24.307251118" lastFinishedPulling="2024-11-12 20:56:32.945414636 +0000 UTC m=+43.042572608" observedRunningTime="2024-11-12 20:56:33.225753998 +0000 UTC m=+43.322911970" watchObservedRunningTime="2024-11-12 20:56:33.229095029 +0000 UTC m=+43.326252901" Nov 12 20:56:33.353571 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 20:56:33.353690 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 12 20:56:34.985193 kernel: bpftool[4670]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 20:56:35.257145 systemd-networkd[1366]: vxlan.calico: Link UP Nov 12 20:56:35.257171 systemd-networkd[1366]: vxlan.calico: Gained carrier Nov 12 20:56:37.204541 systemd-networkd[1366]: vxlan.calico: Gained IPv6LL Nov 12 20:56:39.023669 containerd[1782]: time="2024-11-12T20:56:39.023316301Z" level=info msg="StopPodSandbox for \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\"" Nov 12 20:56:39.024205 containerd[1782]: time="2024-11-12T20:56:39.024146407Z" level=info msg="StopPodSandbox for \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\"" Nov 12 20:56:39.131783 containerd[1782]: 2024-11-12 20:56:39.089 [INFO][4796] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Nov 12 20:56:39.131783 containerd[1782]: 2024-11-12 20:56:39.090 [INFO][4796] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" iface="eth0" netns="/var/run/netns/cni-ddd0ec8d-501b-610e-ab3d-7df806c034a7" Nov 12 20:56:39.131783 containerd[1782]: 2024-11-12 20:56:39.090 [INFO][4796] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" iface="eth0" netns="/var/run/netns/cni-ddd0ec8d-501b-610e-ab3d-7df806c034a7" Nov 12 20:56:39.131783 containerd[1782]: 2024-11-12 20:56:39.091 [INFO][4796] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" iface="eth0" netns="/var/run/netns/cni-ddd0ec8d-501b-610e-ab3d-7df806c034a7" Nov 12 20:56:39.131783 containerd[1782]: 2024-11-12 20:56:39.091 [INFO][4796] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Nov 12 20:56:39.131783 containerd[1782]: 2024-11-12 20:56:39.091 [INFO][4796] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Nov 12 20:56:39.131783 containerd[1782]: 2024-11-12 20:56:39.118 [INFO][4808] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" HandleID="k8s-pod-network.a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0" Nov 12 20:56:39.131783 containerd[1782]: 2024-11-12 20:56:39.118 [INFO][4808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:39.131783 containerd[1782]: 2024-11-12 20:56:39.118 [INFO][4808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:39.131783 containerd[1782]: 2024-11-12 20:56:39.125 [WARNING][4808] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" HandleID="k8s-pod-network.a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0" Nov 12 20:56:39.131783 containerd[1782]: 2024-11-12 20:56:39.125 [INFO][4808] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" HandleID="k8s-pod-network.a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0" Nov 12 20:56:39.131783 containerd[1782]: 2024-11-12 20:56:39.127 [INFO][4808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:39.131783 containerd[1782]: 2024-11-12 20:56:39.129 [INFO][4796] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Nov 12 20:56:39.132904 containerd[1782]: time="2024-11-12T20:56:39.132282107Z" level=info msg="TearDown network for sandbox \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\" successfully" Nov 12 20:56:39.132904 containerd[1782]: time="2024-11-12T20:56:39.132326207Z" level=info msg="StopPodSandbox for \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\" returns successfully" Nov 12 20:56:39.138351 containerd[1782]: time="2024-11-12T20:56:39.136532742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p948p,Uid:e4125a79-bb3b-439b-8dfa-c76cc22a17a7,Namespace:kube-system,Attempt:1,}" Nov 12 20:56:39.138247 systemd[1]: run-netns-cni\x2dddd0ec8d\x2d501b\x2d610e\x2dab3d\x2d7df806c034a7.mount: Deactivated successfully. 
Nov 12 20:56:39.142583 containerd[1782]: 2024-11-12 20:56:39.087 [INFO][4791] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Nov 12 20:56:39.142583 containerd[1782]: 2024-11-12 20:56:39.087 [INFO][4791] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" iface="eth0" netns="/var/run/netns/cni-254ebd42-9110-726a-eb9b-47450879c8bc" Nov 12 20:56:39.142583 containerd[1782]: 2024-11-12 20:56:39.087 [INFO][4791] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" iface="eth0" netns="/var/run/netns/cni-254ebd42-9110-726a-eb9b-47450879c8bc" Nov 12 20:56:39.142583 containerd[1782]: 2024-11-12 20:56:39.087 [INFO][4791] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" iface="eth0" netns="/var/run/netns/cni-254ebd42-9110-726a-eb9b-47450879c8bc" Nov 12 20:56:39.142583 containerd[1782]: 2024-11-12 20:56:39.087 [INFO][4791] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Nov 12 20:56:39.142583 containerd[1782]: 2024-11-12 20:56:39.087 [INFO][4791] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Nov 12 20:56:39.142583 containerd[1782]: 2024-11-12 20:56:39.123 [INFO][4807] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" HandleID="k8s-pod-network.50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0" Nov 12 20:56:39.142583 containerd[1782]: 2024-11-12 20:56:39.124 
[INFO][4807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:39.142583 containerd[1782]: 2024-11-12 20:56:39.127 [INFO][4807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:39.142583 containerd[1782]: 2024-11-12 20:56:39.135 [WARNING][4807] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" HandleID="k8s-pod-network.50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0" Nov 12 20:56:39.142583 containerd[1782]: 2024-11-12 20:56:39.135 [INFO][4807] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" HandleID="k8s-pod-network.50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0" Nov 12 20:56:39.142583 containerd[1782]: 2024-11-12 20:56:39.140 [INFO][4807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:39.142583 containerd[1782]: 2024-11-12 20:56:39.141 [INFO][4791] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Nov 12 20:56:39.145217 containerd[1782]: time="2024-11-12T20:56:39.142727994Z" level=info msg="TearDown network for sandbox \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\" successfully" Nov 12 20:56:39.145217 containerd[1782]: time="2024-11-12T20:56:39.142750394Z" level=info msg="StopPodSandbox for \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\" returns successfully" Nov 12 20:56:39.145217 containerd[1782]: time="2024-11-12T20:56:39.144008604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7774cd9f88-w6ktj,Uid:c6403310-33ca-4d11-933f-4e5fed185033,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:56:39.147694 systemd[1]: run-netns-cni\x2d254ebd42\x2d9110\x2d726a\x2deb9b\x2d47450879c8bc.mount: Deactivated successfully. Nov 12 20:56:39.343142 systemd-networkd[1366]: calif50c64650df: Link UP Nov 12 20:56:39.347318 systemd-networkd[1366]: calif50c64650df: Gained carrier Nov 12 20:56:39.371729 containerd[1782]: 2024-11-12 20:56:39.256 [INFO][4819] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0 coredns-76f75df574- kube-system e4125a79-bb3b-439b-8dfa-c76cc22a17a7 757 0 2024-11-12 20:56:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.0-a-1543c8d709 coredns-76f75df574-p948p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif50c64650df [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb" Namespace="kube-system" Pod="coredns-76f75df574-p948p" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-" Nov 12 20:56:39.371729 
containerd[1782]: 2024-11-12 20:56:39.256 [INFO][4819] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb" Namespace="kube-system" Pod="coredns-76f75df574-p948p" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0" Nov 12 20:56:39.371729 containerd[1782]: 2024-11-12 20:56:39.297 [INFO][4841] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb" HandleID="k8s-pod-network.3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0" Nov 12 20:56:39.371729 containerd[1782]: 2024-11-12 20:56:39.307 [INFO][4841] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb" HandleID="k8s-pod-network.3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bcbb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.0-a-1543c8d709", "pod":"coredns-76f75df574-p948p", "timestamp":"2024-11-12 20:56:39.297788683 +0000 UTC"}, Hostname:"ci-4081.2.0-a-1543c8d709", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:56:39.371729 containerd[1782]: 2024-11-12 20:56:39.307 [INFO][4841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:39.371729 containerd[1782]: 2024-11-12 20:56:39.307 [INFO][4841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:56:39.371729 containerd[1782]: 2024-11-12 20:56:39.307 [INFO][4841] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-1543c8d709' Nov 12 20:56:39.371729 containerd[1782]: 2024-11-12 20:56:39.309 [INFO][4841] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:39.371729 containerd[1782]: 2024-11-12 20:56:39.314 [INFO][4841] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:39.371729 containerd[1782]: 2024-11-12 20:56:39.317 [INFO][4841] ipam/ipam.go 489: Trying affinity for 192.168.0.192/26 host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:39.371729 containerd[1782]: 2024-11-12 20:56:39.319 [INFO][4841] ipam/ipam.go 155: Attempting to load block cidr=192.168.0.192/26 host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:39.371729 containerd[1782]: 2024-11-12 20:56:39.321 [INFO][4841] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.0.192/26 host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:39.371729 containerd[1782]: 2024-11-12 20:56:39.321 [INFO][4841] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.0.192/26 handle="k8s-pod-network.3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:39.371729 containerd[1782]: 2024-11-12 20:56:39.322 [INFO][4841] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb Nov 12 20:56:39.371729 containerd[1782]: 2024-11-12 20:56:39.326 [INFO][4841] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.0.192/26 handle="k8s-pod-network.3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:39.371729 containerd[1782]: 2024-11-12 20:56:39.334 [INFO][4841] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.0.193/26] block=192.168.0.192/26 handle="k8s-pod-network.3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:39.371729 containerd[1782]: 2024-11-12 20:56:39.334 [INFO][4841] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.0.193/26] handle="k8s-pod-network.3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:39.371729 containerd[1782]: 2024-11-12 20:56:39.334 [INFO][4841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:39.371729 containerd[1782]: 2024-11-12 20:56:39.334 [INFO][4841] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.193/26] IPv6=[] ContainerID="3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb" HandleID="k8s-pod-network.3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0" Nov 12 20:56:39.373387 containerd[1782]: 2024-11-12 20:56:39.337 [INFO][4819] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb" Namespace="kube-system" Pod="coredns-76f75df574-p948p" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e4125a79-bb3b-439b-8dfa-c76cc22a17a7", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"", Pod:"coredns-76f75df574-p948p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif50c64650df", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:39.373387 containerd[1782]: 2024-11-12 20:56:39.337 [INFO][4819] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.0.193/32] ContainerID="3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb" Namespace="kube-system" Pod="coredns-76f75df574-p948p" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0" Nov 12 20:56:39.373387 containerd[1782]: 2024-11-12 20:56:39.337 [INFO][4819] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif50c64650df ContainerID="3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb" Namespace="kube-system" Pod="coredns-76f75df574-p948p" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0" Nov 12 20:56:39.373387 containerd[1782]: 2024-11-12 20:56:39.347 [INFO][4819] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb" Namespace="kube-system" Pod="coredns-76f75df574-p948p" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0" Nov 12 20:56:39.373387 containerd[1782]: 2024-11-12 20:56:39.348 [INFO][4819] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb" Namespace="kube-system" Pod="coredns-76f75df574-p948p" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e4125a79-bb3b-439b-8dfa-c76cc22a17a7", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb", Pod:"coredns-76f75df574-p948p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif50c64650df", MAC:"62:6f:a6:b6:cd:2b", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:39.373387 containerd[1782]: 2024-11-12 20:56:39.367 [INFO][4819] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb" Namespace="kube-system" Pod="coredns-76f75df574-p948p" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0" Nov 12 20:56:39.397649 systemd-networkd[1366]: cali74ac894f678: Link UP Nov 12 20:56:39.398748 systemd-networkd[1366]: cali74ac894f678: Gained carrier Nov 12 20:56:39.419832 containerd[1782]: 2024-11-12 20:56:39.256 [INFO][4827] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0 calico-apiserver-7774cd9f88- calico-apiserver c6403310-33ca-4d11-933f-4e5fed185033 756 0 2024-11-12 20:56:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7774cd9f88 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.0-a-1543c8d709 calico-apiserver-7774cd9f88-w6ktj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali74ac894f678 [] []}} ContainerID="7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc" Namespace="calico-apiserver" Pod="calico-apiserver-7774cd9f88-w6ktj" 
WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-" Nov 12 20:56:39.419832 containerd[1782]: 2024-11-12 20:56:39.257 [INFO][4827] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc" Namespace="calico-apiserver" Pod="calico-apiserver-7774cd9f88-w6ktj" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0" Nov 12 20:56:39.419832 containerd[1782]: 2024-11-12 20:56:39.294 [INFO][4845] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc" HandleID="k8s-pod-network.7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0" Nov 12 20:56:39.419832 containerd[1782]: 2024-11-12 20:56:39.308 [INFO][4845] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc" HandleID="k8s-pod-network.7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318e60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.0-a-1543c8d709", "pod":"calico-apiserver-7774cd9f88-w6ktj", "timestamp":"2024-11-12 20:56:39.294299154 +0000 UTC"}, Hostname:"ci-4081.2.0-a-1543c8d709", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:56:39.419832 containerd[1782]: 2024-11-12 20:56:39.308 [INFO][4845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 20:56:39.419832 containerd[1782]: 2024-11-12 20:56:39.334 [INFO][4845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:39.419832 containerd[1782]: 2024-11-12 20:56:39.334 [INFO][4845] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-1543c8d709' Nov 12 20:56:39.419832 containerd[1782]: 2024-11-12 20:56:39.337 [INFO][4845] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:39.419832 containerd[1782]: 2024-11-12 20:56:39.347 [INFO][4845] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:39.419832 containerd[1782]: 2024-11-12 20:56:39.354 [INFO][4845] ipam/ipam.go 489: Trying affinity for 192.168.0.192/26 host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:39.419832 containerd[1782]: 2024-11-12 20:56:39.357 [INFO][4845] ipam/ipam.go 155: Attempting to load block cidr=192.168.0.192/26 host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:39.419832 containerd[1782]: 2024-11-12 20:56:39.368 [INFO][4845] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.0.192/26 host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:39.419832 containerd[1782]: 2024-11-12 20:56:39.368 [INFO][4845] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.0.192/26 handle="k8s-pod-network.7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:39.419832 containerd[1782]: 2024-11-12 20:56:39.372 [INFO][4845] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc Nov 12 20:56:39.419832 containerd[1782]: 2024-11-12 20:56:39.380 [INFO][4845] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.0.192/26 handle="k8s-pod-network.7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc" 
host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:39.419832 containerd[1782]: 2024-11-12 20:56:39.390 [INFO][4845] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.0.194/26] block=192.168.0.192/26 handle="k8s-pod-network.7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:39.419832 containerd[1782]: 2024-11-12 20:56:39.390 [INFO][4845] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.0.194/26] handle="k8s-pod-network.7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:39.419832 containerd[1782]: 2024-11-12 20:56:39.390 [INFO][4845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:39.419832 containerd[1782]: 2024-11-12 20:56:39.390 [INFO][4845] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.194/26] IPv6=[] ContainerID="7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc" HandleID="k8s-pod-network.7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0" Nov 12 20:56:39.421932 containerd[1782]: 2024-11-12 20:56:39.392 [INFO][4827] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc" Namespace="calico-apiserver" Pod="calico-apiserver-7774cd9f88-w6ktj" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0", GenerateName:"calico-apiserver-7774cd9f88-", Namespace:"calico-apiserver", SelfLink:"", UID:"c6403310-33ca-4d11-933f-4e5fed185033", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 13, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7774cd9f88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"", Pod:"calico-apiserver-7774cd9f88-w6ktj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74ac894f678", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:39.421932 containerd[1782]: 2024-11-12 20:56:39.392 [INFO][4827] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.0.194/32] ContainerID="7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc" Namespace="calico-apiserver" Pod="calico-apiserver-7774cd9f88-w6ktj" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0" Nov 12 20:56:39.421932 containerd[1782]: 2024-11-12 20:56:39.392 [INFO][4827] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74ac894f678 ContainerID="7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc" Namespace="calico-apiserver" Pod="calico-apiserver-7774cd9f88-w6ktj" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0" Nov 12 20:56:39.421932 containerd[1782]: 2024-11-12 20:56:39.399 [INFO][4827] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc" Namespace="calico-apiserver" Pod="calico-apiserver-7774cd9f88-w6ktj" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0" Nov 12 20:56:39.421932 containerd[1782]: 2024-11-12 20:56:39.400 [INFO][4827] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc" Namespace="calico-apiserver" Pod="calico-apiserver-7774cd9f88-w6ktj" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0", GenerateName:"calico-apiserver-7774cd9f88-", Namespace:"calico-apiserver", SelfLink:"", UID:"c6403310-33ca-4d11-933f-4e5fed185033", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7774cd9f88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc", Pod:"calico-apiserver-7774cd9f88-w6ktj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74ac894f678", MAC:"1e:d1:e9:24:59:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:39.421932 containerd[1782]: 2024-11-12 20:56:39.415 [INFO][4827] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc" Namespace="calico-apiserver" Pod="calico-apiserver-7774cd9f88-w6ktj" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0" Nov 12 20:56:39.425965 containerd[1782]: time="2024-11-12T20:56:39.425497845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:39.425965 containerd[1782]: time="2024-11-12T20:56:39.425557446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:39.425965 containerd[1782]: time="2024-11-12T20:56:39.425574246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:39.425965 containerd[1782]: time="2024-11-12T20:56:39.425666547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:39.468488 containerd[1782]: time="2024-11-12T20:56:39.468390802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:39.468618 containerd[1782]: time="2024-11-12T20:56:39.468511003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:39.468618 containerd[1782]: time="2024-11-12T20:56:39.468528903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:39.470178 containerd[1782]: time="2024-11-12T20:56:39.468708205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:39.510677 containerd[1782]: time="2024-11-12T20:56:39.510634253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p948p,Uid:e4125a79-bb3b-439b-8dfa-c76cc22a17a7,Namespace:kube-system,Attempt:1,} returns sandbox id \"3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb\"" Nov 12 20:56:39.514072 containerd[1782]: time="2024-11-12T20:56:39.514045182Z" level=info msg="CreateContainer within sandbox \"3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:56:39.535604 containerd[1782]: time="2024-11-12T20:56:39.535583161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7774cd9f88-w6ktj,Uid:c6403310-33ca-4d11-933f-4e5fed185033,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc\"" Nov 12 20:56:39.540785 containerd[1782]: time="2024-11-12T20:56:39.536818571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:56:39.561134 containerd[1782]: time="2024-11-12T20:56:39.561049073Z" level=info msg="CreateContainer within sandbox \"3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4403ab75708ba5abbb6a4718b68254f043deae048fa5d712bd7776d9df4ee1c3\"" Nov 12 20:56:39.561516 containerd[1782]: time="2024-11-12T20:56:39.561490676Z" level=info msg="StartContainer for 
\"4403ab75708ba5abbb6a4718b68254f043deae048fa5d712bd7776d9df4ee1c3\"" Nov 12 20:56:39.608527 containerd[1782]: time="2024-11-12T20:56:39.608423467Z" level=info msg="StartContainer for \"4403ab75708ba5abbb6a4718b68254f043deae048fa5d712bd7776d9df4ee1c3\" returns successfully" Nov 12 20:56:40.243092 kubelet[3479]: I1112 20:56:40.243049 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-p948p" podStartSLOduration=36.242948344 podStartE2EDuration="36.242948344s" podCreationTimestamp="2024-11-12 20:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:40.241996036 +0000 UTC m=+50.339154008" watchObservedRunningTime="2024-11-12 20:56:40.242948344 +0000 UTC m=+50.340106216" Nov 12 20:56:40.622498 kubelet[3479]: I1112 20:56:40.622230 3479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:56:41.023723 containerd[1782]: time="2024-11-12T20:56:41.023326534Z" level=info msg="StopPodSandbox for \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\"" Nov 12 20:56:41.044429 systemd-networkd[1366]: calif50c64650df: Gained IPv6LL Nov 12 20:56:41.097520 containerd[1782]: 2024-11-12 20:56:41.070 [INFO][5070] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Nov 12 20:56:41.097520 containerd[1782]: 2024-11-12 20:56:41.070 [INFO][5070] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" iface="eth0" netns="/var/run/netns/cni-589fdd02-30fd-ffbc-c810-f92cd5b6fcd0" Nov 12 20:56:41.097520 containerd[1782]: 2024-11-12 20:56:41.070 [INFO][5070] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" iface="eth0" netns="/var/run/netns/cni-589fdd02-30fd-ffbc-c810-f92cd5b6fcd0" Nov 12 20:56:41.097520 containerd[1782]: 2024-11-12 20:56:41.071 [INFO][5070] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" iface="eth0" netns="/var/run/netns/cni-589fdd02-30fd-ffbc-c810-f92cd5b6fcd0" Nov 12 20:56:41.097520 containerd[1782]: 2024-11-12 20:56:41.071 [INFO][5070] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Nov 12 20:56:41.097520 containerd[1782]: 2024-11-12 20:56:41.071 [INFO][5070] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Nov 12 20:56:41.097520 containerd[1782]: 2024-11-12 20:56:41.089 [INFO][5077] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" HandleID="k8s-pod-network.6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Workload="ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0" Nov 12 20:56:41.097520 containerd[1782]: 2024-11-12 20:56:41.089 [INFO][5077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:41.097520 containerd[1782]: 2024-11-12 20:56:41.089 [INFO][5077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:41.097520 containerd[1782]: 2024-11-12 20:56:41.094 [WARNING][5077] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" HandleID="k8s-pod-network.6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Workload="ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0" Nov 12 20:56:41.097520 containerd[1782]: 2024-11-12 20:56:41.094 [INFO][5077] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" HandleID="k8s-pod-network.6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Workload="ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0" Nov 12 20:56:41.097520 containerd[1782]: 2024-11-12 20:56:41.095 [INFO][5077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:41.097520 containerd[1782]: 2024-11-12 20:56:41.096 [INFO][5070] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Nov 12 20:56:41.098889 containerd[1782]: time="2024-11-12T20:56:41.098288858Z" level=info msg="TearDown network for sandbox \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\" successfully" Nov 12 20:56:41.098889 containerd[1782]: time="2024-11-12T20:56:41.098326758Z" level=info msg="StopPodSandbox for \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\" returns successfully" Nov 12 20:56:41.100472 containerd[1782]: time="2024-11-12T20:56:41.099538868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6rmfb,Uid:6ef4534f-dec6-4d07-bd02-f445b758fa12,Namespace:calico-system,Attempt:1,}" Nov 12 20:56:41.102072 systemd[1]: run-netns-cni\x2d589fdd02\x2d30fd\x2dffbc\x2dc810\x2df92cd5b6fcd0.mount: Deactivated successfully. 
Nov 12 20:56:41.108611 systemd-networkd[1366]: cali74ac894f678: Gained IPv6LL Nov 12 20:56:41.360955 systemd-networkd[1366]: calie1f698a26ff: Link UP Nov 12 20:56:41.361750 systemd-networkd[1366]: calie1f698a26ff: Gained carrier Nov 12 20:56:41.389983 containerd[1782]: 2024-11-12 20:56:41.257 [INFO][5087] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0 csi-node-driver- calico-system 6ef4534f-dec6-4d07-bd02-f445b758fa12 782 0 2024-11-12 20:56:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:64dd8495dc k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.2.0-a-1543c8d709 csi-node-driver-6rmfb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie1f698a26ff [] []}} ContainerID="3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584" Namespace="calico-system" Pod="csi-node-driver-6rmfb" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-" Nov 12 20:56:41.389983 containerd[1782]: 2024-11-12 20:56:41.257 [INFO][5087] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584" Namespace="calico-system" Pod="csi-node-driver-6rmfb" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0" Nov 12 20:56:41.389983 containerd[1782]: 2024-11-12 20:56:41.302 [INFO][5099] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584" HandleID="k8s-pod-network.3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584" Workload="ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0" Nov 12 
20:56:41.389983 containerd[1782]: 2024-11-12 20:56:41.314 [INFO][5099] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584" HandleID="k8s-pod-network.3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584" Workload="ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002927a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.0-a-1543c8d709", "pod":"csi-node-driver-6rmfb", "timestamp":"2024-11-12 20:56:41.302794859 +0000 UTC"}, Hostname:"ci-4081.2.0-a-1543c8d709", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:56:41.389983 containerd[1782]: 2024-11-12 20:56:41.314 [INFO][5099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:41.389983 containerd[1782]: 2024-11-12 20:56:41.314 [INFO][5099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:56:41.389983 containerd[1782]: 2024-11-12 20:56:41.314 [INFO][5099] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-1543c8d709' Nov 12 20:56:41.389983 containerd[1782]: 2024-11-12 20:56:41.316 [INFO][5099] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:41.389983 containerd[1782]: 2024-11-12 20:56:41.321 [INFO][5099] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:41.389983 containerd[1782]: 2024-11-12 20:56:41.329 [INFO][5099] ipam/ipam.go 489: Trying affinity for 192.168.0.192/26 host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:41.389983 containerd[1782]: 2024-11-12 20:56:41.331 [INFO][5099] ipam/ipam.go 155: Attempting to load block cidr=192.168.0.192/26 host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:41.389983 containerd[1782]: 2024-11-12 20:56:41.335 [INFO][5099] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.0.192/26 host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:41.389983 containerd[1782]: 2024-11-12 20:56:41.335 [INFO][5099] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.0.192/26 handle="k8s-pod-network.3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:41.389983 containerd[1782]: 2024-11-12 20:56:41.337 [INFO][5099] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584 Nov 12 20:56:41.389983 containerd[1782]: 2024-11-12 20:56:41.344 [INFO][5099] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.0.192/26 handle="k8s-pod-network.3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:41.389983 containerd[1782]: 2024-11-12 20:56:41.353 [INFO][5099] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.0.195/26] block=192.168.0.192/26 handle="k8s-pod-network.3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:41.389983 containerd[1782]: 2024-11-12 20:56:41.353 [INFO][5099] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.0.195/26] handle="k8s-pod-network.3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:41.389983 containerd[1782]: 2024-11-12 20:56:41.353 [INFO][5099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:41.389983 containerd[1782]: 2024-11-12 20:56:41.353 [INFO][5099] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.195/26] IPv6=[] ContainerID="3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584" HandleID="k8s-pod-network.3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584" Workload="ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0" Nov 12 20:56:41.392595 containerd[1782]: 2024-11-12 20:56:41.356 [INFO][5087] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584" Namespace="calico-system" Pod="csi-node-driver-6rmfb" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6ef4534f-dec6-4d07-bd02-f445b758fa12", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"", Pod:"csi-node-driver-6rmfb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.0.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie1f698a26ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:41.392595 containerd[1782]: 2024-11-12 20:56:41.357 [INFO][5087] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.0.195/32] ContainerID="3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584" Namespace="calico-system" Pod="csi-node-driver-6rmfb" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0" Nov 12 20:56:41.392595 containerd[1782]: 2024-11-12 20:56:41.357 [INFO][5087] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1f698a26ff ContainerID="3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584" Namespace="calico-system" Pod="csi-node-driver-6rmfb" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0" Nov 12 20:56:41.392595 containerd[1782]: 2024-11-12 20:56:41.360 [INFO][5087] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584" Namespace="calico-system" Pod="csi-node-driver-6rmfb" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0" Nov 12 20:56:41.392595 containerd[1782]: 2024-11-12 20:56:41.360 
[INFO][5087] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584" Namespace="calico-system" Pod="csi-node-driver-6rmfb" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6ef4534f-dec6-4d07-bd02-f445b758fa12", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584", Pod:"csi-node-driver-6rmfb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.0.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie1f698a26ff", MAC:"ea:2f:85:62:47:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:41.392595 containerd[1782]: 2024-11-12 20:56:41.387 [INFO][5087] cni-plugin/k8s.go 500: Wrote updated endpoint 
to datastore ContainerID="3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584" Namespace="calico-system" Pod="csi-node-driver-6rmfb" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0" Nov 12 20:56:41.433386 containerd[1782]: time="2024-11-12T20:56:41.433125543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:41.433386 containerd[1782]: time="2024-11-12T20:56:41.433216543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:41.433386 containerd[1782]: time="2024-11-12T20:56:41.433263244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:41.435986 containerd[1782]: time="2024-11-12T20:56:41.433505346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:41.471384 systemd[1]: run-containerd-runc-k8s.io-3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584-runc.yXJJwT.mount: Deactivated successfully. 
Nov 12 20:56:41.504775 containerd[1782]: time="2024-11-12T20:56:41.504503736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6rmfb,Uid:6ef4534f-dec6-4d07-bd02-f445b758fa12,Namespace:calico-system,Attempt:1,} returns sandbox id \"3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584\"" Nov 12 20:56:42.023764 containerd[1782]: time="2024-11-12T20:56:42.023342351Z" level=info msg="StopPodSandbox for \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\"" Nov 12 20:56:42.087251 containerd[1782]: time="2024-11-12T20:56:42.087204682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:42.090328 containerd[1782]: time="2024-11-12T20:56:42.090264408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930" Nov 12 20:56:42.095185 containerd[1782]: time="2024-11-12T20:56:42.094471543Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:42.104327 containerd[1782]: time="2024-11-12T20:56:42.104063323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:42.105929 containerd[1782]: time="2024-11-12T20:56:42.105232032Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 2.568380661s" Nov 12 20:56:42.105929 containerd[1782]: 
time="2024-11-12T20:56:42.105291033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:56:42.106429 containerd[1782]: time="2024-11-12T20:56:42.106390342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 20:56:42.108809 containerd[1782]: time="2024-11-12T20:56:42.108776862Z" level=info msg="CreateContainer within sandbox \"7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:56:42.110049 containerd[1782]: 2024-11-12 20:56:42.073 [INFO][5173] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Nov 12 20:56:42.110049 containerd[1782]: 2024-11-12 20:56:42.075 [INFO][5173] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" iface="eth0" netns="/var/run/netns/cni-255607ba-83a9-d5e3-4f72-ef349c039576" Nov 12 20:56:42.110049 containerd[1782]: 2024-11-12 20:56:42.075 [INFO][5173] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" iface="eth0" netns="/var/run/netns/cni-255607ba-83a9-d5e3-4f72-ef349c039576" Nov 12 20:56:42.110049 containerd[1782]: 2024-11-12 20:56:42.076 [INFO][5173] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" iface="eth0" netns="/var/run/netns/cni-255607ba-83a9-d5e3-4f72-ef349c039576" Nov 12 20:56:42.110049 containerd[1782]: 2024-11-12 20:56:42.076 [INFO][5173] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Nov 12 20:56:42.110049 containerd[1782]: 2024-11-12 20:56:42.076 [INFO][5173] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Nov 12 20:56:42.110049 containerd[1782]: 2024-11-12 20:56:42.097 [INFO][5183] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" HandleID="k8s-pod-network.5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0" Nov 12 20:56:42.110049 containerd[1782]: 2024-11-12 20:56:42.097 [INFO][5183] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:42.110049 containerd[1782]: 2024-11-12 20:56:42.097 [INFO][5183] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:42.110049 containerd[1782]: 2024-11-12 20:56:42.103 [WARNING][5183] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" HandleID="k8s-pod-network.5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0" Nov 12 20:56:42.110049 containerd[1782]: 2024-11-12 20:56:42.103 [INFO][5183] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" HandleID="k8s-pod-network.5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0" Nov 12 20:56:42.110049 containerd[1782]: 2024-11-12 20:56:42.105 [INFO][5183] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:42.110049 containerd[1782]: 2024-11-12 20:56:42.108 [INFO][5173] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Nov 12 20:56:42.110641 containerd[1782]: time="2024-11-12T20:56:42.110211974Z" level=info msg="TearDown network for sandbox \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\" successfully" Nov 12 20:56:42.111212 containerd[1782]: time="2024-11-12T20:56:42.110248374Z" level=info msg="StopPodSandbox for \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\" returns successfully" Nov 12 20:56:42.111869 containerd[1782]: time="2024-11-12T20:56:42.111789887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cc8897bfb-42hq9,Uid:618b1aa4-6bea-46a5-a0d9-a90b9001122c,Namespace:calico-system,Attempt:1,}" Nov 12 20:56:42.167968 containerd[1782]: time="2024-11-12T20:56:42.167925954Z" level=info msg="CreateContainer within sandbox \"7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"48a3355d3cb3c054ec323481c950848f241601e36fa3158b59d02b1b53221a49\"" Nov 12 20:56:42.168942 containerd[1782]: time="2024-11-12T20:56:42.168396258Z" level=info msg="StartContainer for \"48a3355d3cb3c054ec323481c950848f241601e36fa3158b59d02b1b53221a49\"" Nov 12 20:56:42.192506 systemd[1]: run-netns-cni\x2d255607ba\x2d83a9\x2dd5e3\x2d4f72\x2def349c039576.mount: Deactivated successfully. Nov 12 20:56:42.284850 containerd[1782]: time="2024-11-12T20:56:42.283706017Z" level=info msg="StartContainer for \"48a3355d3cb3c054ec323481c950848f241601e36fa3158b59d02b1b53221a49\" returns successfully" Nov 12 20:56:42.339833 systemd-networkd[1366]: cali3f8337c9997: Link UP Nov 12 20:56:42.340633 systemd-networkd[1366]: cali3f8337c9997: Gained carrier Nov 12 20:56:42.360670 containerd[1782]: 2024-11-12 20:56:42.239 [INFO][5197] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0 calico-kube-controllers-7cc8897bfb- calico-system 618b1aa4-6bea-46a5-a0d9-a90b9001122c 790 0 2024-11-12 20:56:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7cc8897bfb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.2.0-a-1543c8d709 calico-kube-controllers-7cc8897bfb-42hq9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3f8337c9997 [] []}} ContainerID="d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066" Namespace="calico-system" Pod="calico-kube-controllers-7cc8897bfb-42hq9" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-" Nov 12 20:56:42.360670 containerd[1782]: 2024-11-12 20:56:42.240 [INFO][5197] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066" Namespace="calico-system" Pod="calico-kube-controllers-7cc8897bfb-42hq9" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0" Nov 12 20:56:42.360670 containerd[1782]: 2024-11-12 20:56:42.279 [INFO][5225] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066" HandleID="k8s-pod-network.d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0" Nov 12 20:56:42.360670 containerd[1782]: 2024-11-12 20:56:42.293 [INFO][5225] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066" HandleID="k8s-pod-network.d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000392970), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.0-a-1543c8d709", "pod":"calico-kube-controllers-7cc8897bfb-42hq9", "timestamp":"2024-11-12 20:56:42.279836185 +0000 UTC"}, Hostname:"ci-4081.2.0-a-1543c8d709", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:56:42.360670 containerd[1782]: 2024-11-12 20:56:42.294 [INFO][5225] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:42.360670 containerd[1782]: 2024-11-12 20:56:42.294 [INFO][5225] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:56:42.360670 containerd[1782]: 2024-11-12 20:56:42.295 [INFO][5225] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-1543c8d709' Nov 12 20:56:42.360670 containerd[1782]: 2024-11-12 20:56:42.297 [INFO][5225] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:42.360670 containerd[1782]: 2024-11-12 20:56:42.301 [INFO][5225] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:42.360670 containerd[1782]: 2024-11-12 20:56:42.308 [INFO][5225] ipam/ipam.go 489: Trying affinity for 192.168.0.192/26 host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:42.360670 containerd[1782]: 2024-11-12 20:56:42.310 [INFO][5225] ipam/ipam.go 155: Attempting to load block cidr=192.168.0.192/26 host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:42.360670 containerd[1782]: 2024-11-12 20:56:42.312 [INFO][5225] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.0.192/26 host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:42.360670 containerd[1782]: 2024-11-12 20:56:42.312 [INFO][5225] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.0.192/26 handle="k8s-pod-network.d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:42.360670 containerd[1782]: 2024-11-12 20:56:42.313 [INFO][5225] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066 Nov 12 20:56:42.360670 containerd[1782]: 2024-11-12 20:56:42.319 [INFO][5225] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.0.192/26 handle="k8s-pod-network.d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:42.360670 containerd[1782]: 2024-11-12 20:56:42.331 [INFO][5225] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.0.196/26] block=192.168.0.192/26 handle="k8s-pod-network.d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:42.360670 containerd[1782]: 2024-11-12 20:56:42.331 [INFO][5225] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.0.196/26] handle="k8s-pod-network.d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:42.360670 containerd[1782]: 2024-11-12 20:56:42.331 [INFO][5225] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:42.360670 containerd[1782]: 2024-11-12 20:56:42.331 [INFO][5225] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.196/26] IPv6=[] ContainerID="d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066" HandleID="k8s-pod-network.d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0" Nov 12 20:56:42.362338 containerd[1782]: 2024-11-12 20:56:42.333 [INFO][5197] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066" Namespace="calico-system" Pod="calico-kube-controllers-7cc8897bfb-42hq9" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0", GenerateName:"calico-kube-controllers-7cc8897bfb-", Namespace:"calico-system", SelfLink:"", UID:"618b1aa4-6bea-46a5-a0d9-a90b9001122c", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cc8897bfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"", Pod:"calico-kube-controllers-7cc8897bfb-42hq9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3f8337c9997", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:42.362338 containerd[1782]: 2024-11-12 20:56:42.333 [INFO][5197] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.0.196/32] ContainerID="d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066" Namespace="calico-system" Pod="calico-kube-controllers-7cc8897bfb-42hq9" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0" Nov 12 20:56:42.362338 containerd[1782]: 2024-11-12 20:56:42.333 [INFO][5197] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f8337c9997 ContainerID="d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066" Namespace="calico-system" Pod="calico-kube-controllers-7cc8897bfb-42hq9" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0" Nov 12 20:56:42.362338 containerd[1782]: 2024-11-12 20:56:42.338 [INFO][5197] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066" Namespace="calico-system" Pod="calico-kube-controllers-7cc8897bfb-42hq9" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0" Nov 12 20:56:42.362338 containerd[1782]: 2024-11-12 20:56:42.338 [INFO][5197] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066" Namespace="calico-system" Pod="calico-kube-controllers-7cc8897bfb-42hq9" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0", GenerateName:"calico-kube-controllers-7cc8897bfb-", Namespace:"calico-system", SelfLink:"", UID:"618b1aa4-6bea-46a5-a0d9-a90b9001122c", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cc8897bfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066", Pod:"calico-kube-controllers-7cc8897bfb-42hq9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.196/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3f8337c9997", MAC:"be:42:8f:15:b7:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:42.362338 containerd[1782]: 2024-11-12 20:56:42.355 [INFO][5197] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066" Namespace="calico-system" Pod="calico-kube-controllers-7cc8897bfb-42hq9" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0" Nov 12 20:56:42.404946 containerd[1782]: time="2024-11-12T20:56:42.403311811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:42.404946 containerd[1782]: time="2024-11-12T20:56:42.403440713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:42.404946 containerd[1782]: time="2024-11-12T20:56:42.403468513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:42.404946 containerd[1782]: time="2024-11-12T20:56:42.403690715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:42.504481 containerd[1782]: time="2024-11-12T20:56:42.504446953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cc8897bfb-42hq9,Uid:618b1aa4-6bea-46a5-a0d9-a90b9001122c,Namespace:calico-system,Attempt:1,} returns sandbox id \"d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066\"" Nov 12 20:56:43.024271 containerd[1782]: time="2024-11-12T20:56:43.024223875Z" level=info msg="StopPodSandbox for \"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\"" Nov 12 20:56:43.025797 containerd[1782]: time="2024-11-12T20:56:43.025762288Z" level=info msg="StopPodSandbox for \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\"" Nov 12 20:56:43.143866 containerd[1782]: 2024-11-12 20:56:43.101 [INFO][5324] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Nov 12 20:56:43.143866 containerd[1782]: 2024-11-12 20:56:43.101 [INFO][5324] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" iface="eth0" netns="/var/run/netns/cni-7b595b18-5e8d-c665-12a4-b543e5904aa7" Nov 12 20:56:43.143866 containerd[1782]: 2024-11-12 20:56:43.101 [INFO][5324] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" iface="eth0" netns="/var/run/netns/cni-7b595b18-5e8d-c665-12a4-b543e5904aa7" Nov 12 20:56:43.143866 containerd[1782]: 2024-11-12 20:56:43.101 [INFO][5324] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" iface="eth0" netns="/var/run/netns/cni-7b595b18-5e8d-c665-12a4-b543e5904aa7" Nov 12 20:56:43.143866 containerd[1782]: 2024-11-12 20:56:43.101 [INFO][5324] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Nov 12 20:56:43.143866 containerd[1782]: 2024-11-12 20:56:43.101 [INFO][5324] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Nov 12 20:56:43.143866 containerd[1782]: 2024-11-12 20:56:43.133 [INFO][5337] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" HandleID="k8s-pod-network.e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0" Nov 12 20:56:43.143866 containerd[1782]: 2024-11-12 20:56:43.133 [INFO][5337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:43.143866 containerd[1782]: 2024-11-12 20:56:43.133 [INFO][5337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:43.143866 containerd[1782]: 2024-11-12 20:56:43.140 [WARNING][5337] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" HandleID="k8s-pod-network.e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0" Nov 12 20:56:43.143866 containerd[1782]: 2024-11-12 20:56:43.140 [INFO][5337] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" HandleID="k8s-pod-network.e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0" Nov 12 20:56:43.143866 containerd[1782]: 2024-11-12 20:56:43.141 [INFO][5337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:43.143866 containerd[1782]: 2024-11-12 20:56:43.142 [INFO][5324] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Nov 12 20:56:43.145476 containerd[1782]: time="2024-11-12T20:56:43.145444784Z" level=info msg="TearDown network for sandbox \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\" successfully" Nov 12 20:56:43.145564 containerd[1782]: time="2024-11-12T20:56:43.145547384Z" level=info msg="StopPodSandbox for \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\" returns successfully" Nov 12 20:56:43.147477 containerd[1782]: time="2024-11-12T20:56:43.146476292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7774cd9f88-p4f2v,Uid:54c74a88-f218-4ecd-bff2-da8a0009d8be,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:56:43.151298 systemd[1]: run-netns-cni\x2d7b595b18\x2d5e8d\x2dc665\x2d12a4\x2db543e5904aa7.mount: Deactivated successfully. 
Nov 12 20:56:43.157627 containerd[1782]: 2024-11-12 20:56:43.105 [INFO][5319] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Nov 12 20:56:43.157627 containerd[1782]: 2024-11-12 20:56:43.105 [INFO][5319] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" iface="eth0" netns="/var/run/netns/cni-179ff3a4-0ba4-c4a8-c7d2-770e2abe321a" Nov 12 20:56:43.157627 containerd[1782]: 2024-11-12 20:56:43.105 [INFO][5319] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" iface="eth0" netns="/var/run/netns/cni-179ff3a4-0ba4-c4a8-c7d2-770e2abe321a" Nov 12 20:56:43.157627 containerd[1782]: 2024-11-12 20:56:43.106 [INFO][5319] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" iface="eth0" netns="/var/run/netns/cni-179ff3a4-0ba4-c4a8-c7d2-770e2abe321a" Nov 12 20:56:43.157627 containerd[1782]: 2024-11-12 20:56:43.106 [INFO][5319] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Nov 12 20:56:43.157627 containerd[1782]: 2024-11-12 20:56:43.106 [INFO][5319] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Nov 12 20:56:43.157627 containerd[1782]: 2024-11-12 20:56:43.138 [INFO][5338] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" HandleID="k8s-pod-network.ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0" Nov 12 20:56:43.157627 containerd[1782]: 2024-11-12 20:56:43.138 [INFO][5338] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:43.157627 containerd[1782]: 2024-11-12 20:56:43.141 [INFO][5338] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:43.157627 containerd[1782]: 2024-11-12 20:56:43.153 [WARNING][5338] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" HandleID="k8s-pod-network.ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0" Nov 12 20:56:43.157627 containerd[1782]: 2024-11-12 20:56:43.153 [INFO][5338] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" HandleID="k8s-pod-network.ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0" Nov 12 20:56:43.157627 containerd[1782]: 2024-11-12 20:56:43.155 [INFO][5338] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:43.157627 containerd[1782]: 2024-11-12 20:56:43.156 [INFO][5319] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Nov 12 20:56:43.158080 containerd[1782]: time="2024-11-12T20:56:43.157986588Z" level=info msg="TearDown network for sandbox \"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\" successfully" Nov 12 20:56:43.158080 containerd[1782]: time="2024-11-12T20:56:43.158005688Z" level=info msg="StopPodSandbox for \"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\" returns successfully" Nov 12 20:56:43.158527 containerd[1782]: time="2024-11-12T20:56:43.158500692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9vlxm,Uid:4fb55e5a-3cc6-4c6a-abc1-01ce9cff27a5,Namespace:kube-system,Attempt:1,}" Nov 12 20:56:43.185726 systemd[1]: run-netns-cni\x2d179ff3a4\x2d0ba4\x2dc4a8\x2dc7d2\x2d770e2abe321a.mount: Deactivated successfully. Nov 12 20:56:43.220573 systemd-networkd[1366]: calie1f698a26ff: Gained IPv6LL Nov 12 20:56:43.281915 kubelet[3479]: I1112 20:56:43.281701 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7774cd9f88-w6ktj" podStartSLOduration=27.712309248 podStartE2EDuration="30.281599216s" podCreationTimestamp="2024-11-12 20:56:13 +0000 UTC" firstStartedPulling="2024-11-12 20:56:39.536560369 +0000 UTC m=+49.633718241" lastFinishedPulling="2024-11-12 20:56:42.105850237 +0000 UTC m=+52.203008209" observedRunningTime="2024-11-12 20:56:43.279186696 +0000 UTC m=+53.376344568" watchObservedRunningTime="2024-11-12 20:56:43.281599216 +0000 UTC m=+53.378757188" Nov 12 20:56:43.503260 systemd-networkd[1366]: cali1d6f6b00ea1: Link UP Nov 12 20:56:43.504413 systemd-networkd[1366]: cali1d6f6b00ea1: Gained carrier Nov 12 20:56:43.554763 containerd[1782]: 2024-11-12 20:56:43.293 [INFO][5349] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0 
calico-apiserver-7774cd9f88- calico-apiserver 54c74a88-f218-4ecd-bff2-da8a0009d8be 802 0 2024-11-12 20:56:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7774cd9f88 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.0-a-1543c8d709 calico-apiserver-7774cd9f88-p4f2v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1d6f6b00ea1 [] []}} ContainerID="9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4" Namespace="calico-apiserver" Pod="calico-apiserver-7774cd9f88-p4f2v" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-" Nov 12 20:56:43.554763 containerd[1782]: 2024-11-12 20:56:43.295 [INFO][5349] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4" Namespace="calico-apiserver" Pod="calico-apiserver-7774cd9f88-p4f2v" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0" Nov 12 20:56:43.554763 containerd[1782]: 2024-11-12 20:56:43.386 [INFO][5372] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4" HandleID="k8s-pod-network.9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0" Nov 12 20:56:43.554763 containerd[1782]: 2024-11-12 20:56:43.398 [INFO][5372] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4" HandleID="k8s-pod-network.9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0003bcbd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.0-a-1543c8d709", "pod":"calico-apiserver-7774cd9f88-p4f2v", "timestamp":"2024-11-12 20:56:43.386886792 +0000 UTC"}, Hostname:"ci-4081.2.0-a-1543c8d709", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:56:43.554763 containerd[1782]: 2024-11-12 20:56:43.398 [INFO][5372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:43.554763 containerd[1782]: 2024-11-12 20:56:43.399 [INFO][5372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:43.554763 containerd[1782]: 2024-11-12 20:56:43.399 [INFO][5372] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-1543c8d709' Nov 12 20:56:43.554763 containerd[1782]: 2024-11-12 20:56:43.400 [INFO][5372] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:43.554763 containerd[1782]: 2024-11-12 20:56:43.405 [INFO][5372] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:43.554763 containerd[1782]: 2024-11-12 20:56:43.438 [INFO][5372] ipam/ipam.go 489: Trying affinity for 192.168.0.192/26 host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:43.554763 containerd[1782]: 2024-11-12 20:56:43.447 [INFO][5372] ipam/ipam.go 155: Attempting to load block cidr=192.168.0.192/26 host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:43.554763 containerd[1782]: 2024-11-12 20:56:43.449 [INFO][5372] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.0.192/26 host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:43.554763 containerd[1782]: 2024-11-12 20:56:43.450 [INFO][5372] ipam/ipam.go 1180: Attempting to assign 1 addresses from 
block block=192.168.0.192/26 handle="k8s-pod-network.9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:43.554763 containerd[1782]: 2024-11-12 20:56:43.451 [INFO][5372] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4 Nov 12 20:56:43.554763 containerd[1782]: 2024-11-12 20:56:43.463 [INFO][5372] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.0.192/26 handle="k8s-pod-network.9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:43.554763 containerd[1782]: 2024-11-12 20:56:43.479 [INFO][5372] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.0.197/26] block=192.168.0.192/26 handle="k8s-pod-network.9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:43.554763 containerd[1782]: 2024-11-12 20:56:43.479 [INFO][5372] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.0.197/26] handle="k8s-pod-network.9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:43.554763 containerd[1782]: 2024-11-12 20:56:43.479 [INFO][5372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:56:43.554763 containerd[1782]: 2024-11-12 20:56:43.479 [INFO][5372] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.197/26] IPv6=[] ContainerID="9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4" HandleID="k8s-pod-network.9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0" Nov 12 20:56:43.556085 containerd[1782]: 2024-11-12 20:56:43.485 [INFO][5349] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4" Namespace="calico-apiserver" Pod="calico-apiserver-7774cd9f88-p4f2v" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0", GenerateName:"calico-apiserver-7774cd9f88-", Namespace:"calico-apiserver", SelfLink:"", UID:"54c74a88-f218-4ecd-bff2-da8a0009d8be", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7774cd9f88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"", Pod:"calico-apiserver-7774cd9f88-p4f2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.0.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d6f6b00ea1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:43.556085 containerd[1782]: 2024-11-12 20:56:43.486 [INFO][5349] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.0.197/32] ContainerID="9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4" Namespace="calico-apiserver" Pod="calico-apiserver-7774cd9f88-p4f2v" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0" Nov 12 20:56:43.556085 containerd[1782]: 2024-11-12 20:56:43.487 [INFO][5349] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d6f6b00ea1 ContainerID="9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4" Namespace="calico-apiserver" Pod="calico-apiserver-7774cd9f88-p4f2v" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0" Nov 12 20:56:43.556085 containerd[1782]: 2024-11-12 20:56:43.508 [INFO][5349] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4" Namespace="calico-apiserver" Pod="calico-apiserver-7774cd9f88-p4f2v" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0" Nov 12 20:56:43.556085 containerd[1782]: 2024-11-12 20:56:43.511 [INFO][5349] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4" Namespace="calico-apiserver" Pod="calico-apiserver-7774cd9f88-p4f2v" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0", GenerateName:"calico-apiserver-7774cd9f88-", Namespace:"calico-apiserver", SelfLink:"", UID:"54c74a88-f218-4ecd-bff2-da8a0009d8be", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7774cd9f88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4", Pod:"calico-apiserver-7774cd9f88-p4f2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d6f6b00ea1", MAC:"ee:59:d3:08:9c:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:43.556085 containerd[1782]: 2024-11-12 20:56:43.551 [INFO][5349] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4" Namespace="calico-apiserver" Pod="calico-apiserver-7774cd9f88-p4f2v" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0" Nov 12 20:56:43.588649 systemd-networkd[1366]: cali9aee915b3af: Link UP Nov 12 20:56:43.590755 systemd-networkd[1366]: cali9aee915b3af: Gained carrier Nov 12 
20:56:43.632995 containerd[1782]: time="2024-11-12T20:56:43.629934113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:43.632995 containerd[1782]: time="2024-11-12T20:56:43.630825320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:43.632995 containerd[1782]: time="2024-11-12T20:56:43.630844021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:43.632995 containerd[1782]: time="2024-11-12T20:56:43.631302924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:43.648332 containerd[1782]: 2024-11-12 20:56:43.350 [INFO][5358] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0 coredns-76f75df574- kube-system 4fb55e5a-3cc6-4c6a-abc1-01ce9cff27a5 803 0 2024-11-12 20:56:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.0-a-1543c8d709 coredns-76f75df574-9vlxm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9aee915b3af [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973" Namespace="kube-system" Pod="coredns-76f75df574-9vlxm" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-" Nov 12 20:56:43.648332 containerd[1782]: 2024-11-12 20:56:43.351 [INFO][5358] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973" 
Namespace="kube-system" Pod="coredns-76f75df574-9vlxm" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0" Nov 12 20:56:43.648332 containerd[1782]: 2024-11-12 20:56:43.427 [INFO][5380] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973" HandleID="k8s-pod-network.2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0" Nov 12 20:56:43.648332 containerd[1782]: 2024-11-12 20:56:43.448 [INFO][5380] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973" HandleID="k8s-pod-network.2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051dd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.0-a-1543c8d709", "pod":"coredns-76f75df574-9vlxm", "timestamp":"2024-11-12 20:56:43.42754353 +0000 UTC"}, Hostname:"ci-4081.2.0-a-1543c8d709", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:56:43.648332 containerd[1782]: 2024-11-12 20:56:43.448 [INFO][5380] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:43.648332 containerd[1782]: 2024-11-12 20:56:43.480 [INFO][5380] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:56:43.648332 containerd[1782]: 2024-11-12 20:56:43.480 [INFO][5380] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-1543c8d709' Nov 12 20:56:43.648332 containerd[1782]: 2024-11-12 20:56:43.484 [INFO][5380] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:43.648332 containerd[1782]: 2024-11-12 20:56:43.494 [INFO][5380] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:43.648332 containerd[1782]: 2024-11-12 20:56:43.509 [INFO][5380] ipam/ipam.go 489: Trying affinity for 192.168.0.192/26 host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:43.648332 containerd[1782]: 2024-11-12 20:56:43.513 [INFO][5380] ipam/ipam.go 155: Attempting to load block cidr=192.168.0.192/26 host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:43.648332 containerd[1782]: 2024-11-12 20:56:43.518 [INFO][5380] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.0.192/26 host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:43.648332 containerd[1782]: 2024-11-12 20:56:43.518 [INFO][5380] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.0.192/26 handle="k8s-pod-network.2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:43.648332 containerd[1782]: 2024-11-12 20:56:43.550 [INFO][5380] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973 Nov 12 20:56:43.648332 containerd[1782]: 2024-11-12 20:56:43.559 [INFO][5380] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.0.192/26 handle="k8s-pod-network.2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:43.648332 containerd[1782]: 2024-11-12 20:56:43.569 [INFO][5380] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.0.198/26] block=192.168.0.192/26 handle="k8s-pod-network.2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:43.648332 containerd[1782]: 2024-11-12 20:56:43.569 [INFO][5380] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.0.198/26] handle="k8s-pod-network.2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973" host="ci-4081.2.0-a-1543c8d709" Nov 12 20:56:43.648332 containerd[1782]: 2024-11-12 20:56:43.569 [INFO][5380] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:43.648332 containerd[1782]: 2024-11-12 20:56:43.569 [INFO][5380] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.0.198/26] IPv6=[] ContainerID="2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973" HandleID="k8s-pod-network.2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0" Nov 12 20:56:43.650219 containerd[1782]: 2024-11-12 20:56:43.580 [INFO][5358] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973" Namespace="kube-system" Pod="coredns-76f75df574-9vlxm" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4fb55e5a-3cc6-4c6a-abc1-01ce9cff27a5", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"", Pod:"coredns-76f75df574-9vlxm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9aee915b3af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:43.650219 containerd[1782]: 2024-11-12 20:56:43.581 [INFO][5358] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.0.198/32] ContainerID="2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973" Namespace="kube-system" Pod="coredns-76f75df574-9vlxm" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0" Nov 12 20:56:43.650219 containerd[1782]: 2024-11-12 20:56:43.581 [INFO][5358] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9aee915b3af ContainerID="2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973" Namespace="kube-system" Pod="coredns-76f75df574-9vlxm" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0" Nov 12 20:56:43.650219 containerd[1782]: 2024-11-12 20:56:43.593 [INFO][5358] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973" Namespace="kube-system" Pod="coredns-76f75df574-9vlxm" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0" Nov 12 20:56:43.650219 containerd[1782]: 2024-11-12 20:56:43.594 [INFO][5358] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973" Namespace="kube-system" Pod="coredns-76f75df574-9vlxm" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4fb55e5a-3cc6-4c6a-abc1-01ce9cff27a5", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973", Pod:"coredns-76f75df574-9vlxm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9aee915b3af", MAC:"ca:df:a2:2b:eb:33", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:43.650219 containerd[1782]: 2024-11-12 20:56:43.639 [INFO][5358] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973" Namespace="kube-system" Pod="coredns-76f75df574-9vlxm" WorkloadEndpoint="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0" Nov 12 20:56:43.733535 containerd[1782]: time="2024-11-12T20:56:43.732483366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:43.733535 containerd[1782]: time="2024-11-12T20:56:43.732544166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:43.733535 containerd[1782]: time="2024-11-12T20:56:43.732565067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:43.733535 containerd[1782]: time="2024-11-12T20:56:43.732654667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:43.881779 containerd[1782]: time="2024-11-12T20:56:43.879947392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9vlxm,Uid:4fb55e5a-3cc6-4c6a-abc1-01ce9cff27a5,Namespace:kube-system,Attempt:1,} returns sandbox id \"2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973\"" Nov 12 20:56:43.891974 containerd[1782]: time="2024-11-12T20:56:43.891872292Z" level=info msg="CreateContainer within sandbox \"2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:56:43.897562 containerd[1782]: time="2024-11-12T20:56:43.897499738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7774cd9f88-p4f2v,Uid:54c74a88-f218-4ecd-bff2-da8a0009d8be,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4\"" Nov 12 20:56:43.901713 containerd[1782]: time="2024-11-12T20:56:43.901602472Z" level=info msg="CreateContainer within sandbox \"9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:56:43.935911 containerd[1782]: time="2024-11-12T20:56:43.935877158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:43.940363 containerd[1782]: time="2024-11-12T20:56:43.940242994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635" Nov 12 20:56:43.956399 containerd[1782]: time="2024-11-12T20:56:43.956327428Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:43.979988 containerd[1782]: time="2024-11-12T20:56:43.979898124Z" level=info 
msg="CreateContainer within sandbox \"2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"55d12c12ab20115ad9fe90bc160169af9986585b24824507cd2c40631b1e476a\"" Nov 12 20:56:43.980638 containerd[1782]: time="2024-11-12T20:56:43.980521729Z" level=info msg="StartContainer for \"55d12c12ab20115ad9fe90bc160169af9986585b24824507cd2c40631b1e476a\"" Nov 12 20:56:43.981512 containerd[1782]: time="2024-11-12T20:56:43.981476737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:43.982885 containerd[1782]: time="2024-11-12T20:56:43.982851648Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 1.876414906s" Nov 12 20:56:43.983068 containerd[1782]: time="2024-11-12T20:56:43.982891649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\"" Nov 12 20:56:43.984183 containerd[1782]: time="2024-11-12T20:56:43.984079958Z" level=info msg="CreateContainer within sandbox \"9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"165d859be9f8e49abf2093f3556d63018c02cd02bbc9440481a2e2fec9589a44\"" Nov 12 20:56:43.984443 containerd[1782]: time="2024-11-12T20:56:43.984343061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 20:56:43.987895 containerd[1782]: time="2024-11-12T20:56:43.985916774Z" level=info 
msg="StartContainer for \"165d859be9f8e49abf2093f3556d63018c02cd02bbc9440481a2e2fec9589a44\"" Nov 12 20:56:43.990925 containerd[1782]: time="2024-11-12T20:56:43.990772614Z" level=info msg="CreateContainer within sandbox \"3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 20:56:44.082426 containerd[1782]: time="2024-11-12T20:56:44.081643370Z" level=info msg="CreateContainer within sandbox \"3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a69c811c186c9da2e212a403cb0f3f4807b5321bf1ddd3c3d905882dd56831da\"" Nov 12 20:56:44.087583 containerd[1782]: time="2024-11-12T20:56:44.087272717Z" level=info msg="StartContainer for \"a69c811c186c9da2e212a403cb0f3f4807b5321bf1ddd3c3d905882dd56831da\"" Nov 12 20:56:44.116763 systemd-networkd[1366]: cali3f8337c9997: Gained IPv6LL Nov 12 20:56:44.152632 containerd[1782]: time="2024-11-12T20:56:44.151422650Z" level=info msg="StartContainer for \"55d12c12ab20115ad9fe90bc160169af9986585b24824507cd2c40631b1e476a\" returns successfully" Nov 12 20:56:44.296244 containerd[1782]: time="2024-11-12T20:56:44.295985544Z" level=info msg="StartContainer for \"165d859be9f8e49abf2093f3556d63018c02cd02bbc9440481a2e2fec9589a44\" returns successfully" Nov 12 20:56:44.305923 kubelet[3479]: I1112 20:56:44.305112 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9vlxm" podStartSLOduration=40.304864827 podStartE2EDuration="40.304864827s" podCreationTimestamp="2024-11-12 20:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:44.304302922 +0000 UTC m=+54.401460894" watchObservedRunningTime="2024-11-12 20:56:44.304864827 +0000 UTC m=+54.402022699" Nov 12 20:56:44.369547 containerd[1782]: 
time="2024-11-12T20:56:44.368887528Z" level=info msg="StartContainer for \"a69c811c186c9da2e212a403cb0f3f4807b5321bf1ddd3c3d905882dd56831da\" returns successfully" Nov 12 20:56:44.628384 systemd-networkd[1366]: cali9aee915b3af: Gained IPv6LL Nov 12 20:56:45.012487 systemd-networkd[1366]: cali1d6f6b00ea1: Gained IPv6LL Nov 12 20:56:45.305979 kubelet[3479]: I1112 20:56:45.305730 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7774cd9f88-p4f2v" podStartSLOduration=32.305682088 podStartE2EDuration="32.305682088s" podCreationTimestamp="2024-11-12 20:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:45.303073064 +0000 UTC m=+55.400231036" watchObservedRunningTime="2024-11-12 20:56:45.305682088 +0000 UTC m=+55.402840060" Nov 12 20:56:46.260287 containerd[1782]: time="2024-11-12T20:56:46.260153948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:46.263410 containerd[1782]: time="2024-11-12T20:56:46.263335277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461" Nov 12 20:56:46.268189 containerd[1782]: time="2024-11-12T20:56:46.268108722Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:46.275556 containerd[1782]: time="2024-11-12T20:56:46.275485590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:46.276418 containerd[1782]: time="2024-11-12T20:56:46.276272297Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 2.291894836s" Nov 12 20:56:46.276418 containerd[1782]: time="2024-11-12T20:56:46.276314598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\"" Nov 12 20:56:46.277329 containerd[1782]: time="2024-11-12T20:56:46.277133905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 20:56:46.297121 kubelet[3479]: I1112 20:56:46.296861 3479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:56:46.298266 containerd[1782]: time="2024-11-12T20:56:46.298141600Z" level=info msg="CreateContainer within sandbox \"d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 20:56:46.342390 containerd[1782]: time="2024-11-12T20:56:46.342356211Z" level=info msg="CreateContainer within sandbox \"d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2756616f03226091699d90f23af0d3c3064e762079b7124573857521713cddc0\"" Nov 12 20:56:46.342943 containerd[1782]: time="2024-11-12T20:56:46.342908616Z" level=info msg="StartContainer for \"2756616f03226091699d90f23af0d3c3064e762079b7124573857521713cddc0\"" Nov 12 20:56:46.485003 containerd[1782]: time="2024-11-12T20:56:46.484951934Z" level=info msg="StartContainer for \"2756616f03226091699d90f23af0d3c3064e762079b7124573857521713cddc0\" returns successfully" Nov 12 20:56:47.333823 kubelet[3479]: I1112 20:56:47.331153 
3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7cc8897bfb-42hq9" podStartSLOduration=30.559824548 podStartE2EDuration="34.330970888s" podCreationTimestamp="2024-11-12 20:56:13 +0000 UTC" firstStartedPulling="2024-11-12 20:56:42.505881764 +0000 UTC m=+52.603039736" lastFinishedPulling="2024-11-12 20:56:46.277028104 +0000 UTC m=+56.374186076" observedRunningTime="2024-11-12 20:56:47.326658548 +0000 UTC m=+57.423816420" watchObservedRunningTime="2024-11-12 20:56:47.330970888 +0000 UTC m=+57.428128760" Nov 12 20:56:47.675019 containerd[1782]: time="2024-11-12T20:56:47.674616678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:47.677915 containerd[1782]: time="2024-11-12T20:56:47.677696206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080" Nov 12 20:56:47.682591 containerd[1782]: time="2024-11-12T20:56:47.682439550Z" level=info msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:47.687618 containerd[1782]: time="2024-11-12T20:56:47.687579898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:47.689071 containerd[1782]: time="2024-11-12T20:56:47.688483106Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 1.4112996s" Nov 12 20:56:47.689071 containerd[1782]: time="2024-11-12T20:56:47.688521907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\"" Nov 12 20:56:47.691009 containerd[1782]: time="2024-11-12T20:56:47.690982629Z" level=info msg="CreateContainer within sandbox \"3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 12 20:56:47.733503 containerd[1782]: time="2024-11-12T20:56:47.733417823Z" level=info msg="CreateContainer within sandbox \"3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"152daec74ad3e392a2d4fab403b17eb3548b2ab94cc8cd122b88478699f7ca03\"" Nov 12 20:56:47.734050 containerd[1782]: time="2024-11-12T20:56:47.734017829Z" level=info msg="StartContainer for \"152daec74ad3e392a2d4fab403b17eb3548b2ab94cc8cd122b88478699f7ca03\"" Nov 12 20:56:47.797174 containerd[1782]: time="2024-11-12T20:56:47.797117115Z" level=info msg="StartContainer for \"152daec74ad3e392a2d4fab403b17eb3548b2ab94cc8cd122b88478699f7ca03\" returns successfully" Nov 12 20:56:48.124032 kubelet[3479]: I1112 20:56:48.124003 3479 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 12 20:56:48.124305 kubelet[3479]: I1112 20:56:48.124048 3479 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 12 20:56:50.031113 containerd[1782]: time="2024-11-12T20:56:50.031053851Z" level=info msg="StopPodSandbox for 
\"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\"" Nov 12 20:56:50.104666 containerd[1782]: 2024-11-12 20:56:50.073 [WARNING][5739] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4fb55e5a-3cc6-4c6a-abc1-01ce9cff27a5", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973", Pod:"coredns-76f75df574-9vlxm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9aee915b3af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:50.104666 containerd[1782]: 2024-11-12 20:56:50.073 [INFO][5739] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Nov 12 20:56:50.104666 containerd[1782]: 2024-11-12 20:56:50.073 [INFO][5739] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" iface="eth0" netns="" Nov 12 20:56:50.104666 containerd[1782]: 2024-11-12 20:56:50.073 [INFO][5739] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Nov 12 20:56:50.104666 containerd[1782]: 2024-11-12 20:56:50.073 [INFO][5739] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Nov 12 20:56:50.104666 containerd[1782]: 2024-11-12 20:56:50.095 [INFO][5745] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" HandleID="k8s-pod-network.ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0" Nov 12 20:56:50.104666 containerd[1782]: 2024-11-12 20:56:50.095 [INFO][5745] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:50.104666 containerd[1782]: 2024-11-12 20:56:50.096 [INFO][5745] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:50.104666 containerd[1782]: 2024-11-12 20:56:50.101 [WARNING][5745] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" HandleID="k8s-pod-network.ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0" Nov 12 20:56:50.104666 containerd[1782]: 2024-11-12 20:56:50.101 [INFO][5745] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" HandleID="k8s-pod-network.ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0" Nov 12 20:56:50.104666 containerd[1782]: 2024-11-12 20:56:50.102 [INFO][5745] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:50.104666 containerd[1782]: 2024-11-12 20:56:50.103 [INFO][5739] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Nov 12 20:56:50.105666 containerd[1782]: time="2024-11-12T20:56:50.104688035Z" level=info msg="TearDown network for sandbox \"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\" successfully" Nov 12 20:56:50.105666 containerd[1782]: time="2024-11-12T20:56:50.104718235Z" level=info msg="StopPodSandbox for \"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\" returns successfully" Nov 12 20:56:50.105666 containerd[1782]: time="2024-11-12T20:56:50.105417342Z" level=info msg="RemovePodSandbox for \"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\"" Nov 12 20:56:50.105666 containerd[1782]: time="2024-11-12T20:56:50.105454242Z" level=info msg="Forcibly stopping sandbox \"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\"" Nov 12 20:56:50.255821 containerd[1782]: 2024-11-12 20:56:50.147 [WARNING][5763] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4fb55e5a-3cc6-4c6a-abc1-01ce9cff27a5", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"2091e928d9a077e69163945ef600b952e2543243dd1e33a6941eb082b3e77973", Pod:"coredns-76f75df574-9vlxm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9aee915b3af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:50.255821 containerd[1782]: 2024-11-12 20:56:50.149 [INFO][5763] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Nov 12 20:56:50.255821 containerd[1782]: 2024-11-12 20:56:50.149 [INFO][5763] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" iface="eth0" netns="" Nov 12 20:56:50.255821 containerd[1782]: 2024-11-12 20:56:50.149 [INFO][5763] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Nov 12 20:56:50.255821 containerd[1782]: 2024-11-12 20:56:50.149 [INFO][5763] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Nov 12 20:56:50.255821 containerd[1782]: 2024-11-12 20:56:50.228 [INFO][5770] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" HandleID="k8s-pod-network.ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0" Nov 12 20:56:50.255821 containerd[1782]: 2024-11-12 20:56:50.232 [INFO][5770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:50.255821 containerd[1782]: 2024-11-12 20:56:50.232 [INFO][5770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:50.255821 containerd[1782]: 2024-11-12 20:56:50.247 [WARNING][5770] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" HandleID="k8s-pod-network.ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0" Nov 12 20:56:50.255821 containerd[1782]: 2024-11-12 20:56:50.247 [INFO][5770] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" HandleID="k8s-pod-network.ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--9vlxm-eth0" Nov 12 20:56:50.255821 containerd[1782]: 2024-11-12 20:56:50.249 [INFO][5770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:50.255821 containerd[1782]: 2024-11-12 20:56:50.254 [INFO][5763] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7" Nov 12 20:56:50.256889 containerd[1782]: time="2024-11-12T20:56:50.255876638Z" level=info msg="TearDown network for sandbox \"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\" successfully" Nov 12 20:56:50.267322 containerd[1782]: time="2024-11-12T20:56:50.267275844Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:56:50.268076 containerd[1782]: time="2024-11-12T20:56:50.267364745Z" level=info msg="RemovePodSandbox \"ce18a685bfb6f647e077248ce5d088bde9ffcdcd9799252c0ae1f85d658ba4a7\" returns successfully" Nov 12 20:56:50.268076 containerd[1782]: time="2024-11-12T20:56:50.267891950Z" level=info msg="StopPodSandbox for \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\"" Nov 12 20:56:50.344998 containerd[1782]: 2024-11-12 20:56:50.318 [WARNING][5789] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0", GenerateName:"calico-apiserver-7774cd9f88-", Namespace:"calico-apiserver", SelfLink:"", UID:"c6403310-33ca-4d11-933f-4e5fed185033", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7774cd9f88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc", Pod:"calico-apiserver-7774cd9f88-w6ktj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74ac894f678", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:50.344998 containerd[1782]: 2024-11-12 20:56:50.318 [INFO][5789] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Nov 12 20:56:50.344998 containerd[1782]: 2024-11-12 20:56:50.318 [INFO][5789] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" iface="eth0" netns="" Nov 12 20:56:50.344998 containerd[1782]: 2024-11-12 20:56:50.318 [INFO][5789] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Nov 12 20:56:50.344998 containerd[1782]: 2024-11-12 20:56:50.318 [INFO][5789] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Nov 12 20:56:50.344998 containerd[1782]: 2024-11-12 20:56:50.336 [INFO][5795] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" HandleID="k8s-pod-network.50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0" Nov 12 20:56:50.344998 containerd[1782]: 2024-11-12 20:56:50.336 [INFO][5795] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:50.344998 containerd[1782]: 2024-11-12 20:56:50.336 [INFO][5795] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:50.344998 containerd[1782]: 2024-11-12 20:56:50.341 [WARNING][5795] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" HandleID="k8s-pod-network.50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0" Nov 12 20:56:50.344998 containerd[1782]: 2024-11-12 20:56:50.342 [INFO][5795] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" HandleID="k8s-pod-network.50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0" Nov 12 20:56:50.344998 containerd[1782]: 2024-11-12 20:56:50.343 [INFO][5795] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:50.344998 containerd[1782]: 2024-11-12 20:56:50.344 [INFO][5789] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Nov 12 20:56:50.344998 containerd[1782]: time="2024-11-12T20:56:50.344819764Z" level=info msg="TearDown network for sandbox \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\" successfully" Nov 12 20:56:50.344998 containerd[1782]: time="2024-11-12T20:56:50.344854964Z" level=info msg="StopPodSandbox for \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\" returns successfully" Nov 12 20:56:50.345732 containerd[1782]: time="2024-11-12T20:56:50.345456770Z" level=info msg="RemovePodSandbox for \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\"" Nov 12 20:56:50.345732 containerd[1782]: time="2024-11-12T20:56:50.345499870Z" level=info msg="Forcibly stopping sandbox \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\"" Nov 12 20:56:50.410896 containerd[1782]: 2024-11-12 20:56:50.381 [WARNING][5813] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0", GenerateName:"calico-apiserver-7774cd9f88-", Namespace:"calico-apiserver", SelfLink:"", UID:"c6403310-33ca-4d11-933f-4e5fed185033", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7774cd9f88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"7560b403dbab3094a9dfb9faeaa8397a461088b7d29a16d3754ccd28c2e5c6cc", Pod:"calico-apiserver-7774cd9f88-w6ktj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74ac894f678", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:50.410896 containerd[1782]: 2024-11-12 20:56:50.381 [INFO][5813] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Nov 12 20:56:50.410896 containerd[1782]: 2024-11-12 20:56:50.381 [INFO][5813] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" iface="eth0" netns="" Nov 12 20:56:50.410896 containerd[1782]: 2024-11-12 20:56:50.381 [INFO][5813] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Nov 12 20:56:50.410896 containerd[1782]: 2024-11-12 20:56:50.382 [INFO][5813] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Nov 12 20:56:50.410896 containerd[1782]: 2024-11-12 20:56:50.401 [INFO][5820] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" HandleID="k8s-pod-network.50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0" Nov 12 20:56:50.410896 containerd[1782]: 2024-11-12 20:56:50.401 [INFO][5820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:50.410896 containerd[1782]: 2024-11-12 20:56:50.401 [INFO][5820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:50.410896 containerd[1782]: 2024-11-12 20:56:50.407 [WARNING][5820] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" HandleID="k8s-pod-network.50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0" Nov 12 20:56:50.410896 containerd[1782]: 2024-11-12 20:56:50.407 [INFO][5820] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" HandleID="k8s-pod-network.50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--w6ktj-eth0" Nov 12 20:56:50.410896 containerd[1782]: 2024-11-12 20:56:50.408 [INFO][5820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:50.410896 containerd[1782]: 2024-11-12 20:56:50.409 [INFO][5813] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c" Nov 12 20:56:50.411545 containerd[1782]: time="2024-11-12T20:56:50.410960078Z" level=info msg="TearDown network for sandbox \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\" successfully" Nov 12 20:56:50.420893 containerd[1782]: time="2024-11-12T20:56:50.420852670Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:56:50.421000 containerd[1782]: time="2024-11-12T20:56:50.420933070Z" level=info msg="RemovePodSandbox \"50be6bc63b35f9d8541011a5d01ad952b35d8a044cc503bf4e155653eac1b99c\" returns successfully" Nov 12 20:56:50.421576 containerd[1782]: time="2024-11-12T20:56:50.421471075Z" level=info msg="StopPodSandbox for \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\"" Nov 12 20:56:50.479707 containerd[1782]: 2024-11-12 20:56:50.451 [WARNING][5839] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6ef4534f-dec6-4d07-bd02-f445b758fa12", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584", Pod:"csi-node-driver-6rmfb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.0.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie1f698a26ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:50.479707 containerd[1782]: 2024-11-12 20:56:50.451 [INFO][5839] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Nov 12 20:56:50.479707 containerd[1782]: 2024-11-12 20:56:50.451 [INFO][5839] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" iface="eth0" netns="" Nov 12 20:56:50.479707 containerd[1782]: 2024-11-12 20:56:50.451 [INFO][5839] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Nov 12 20:56:50.479707 containerd[1782]: 2024-11-12 20:56:50.451 [INFO][5839] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Nov 12 20:56:50.479707 containerd[1782]: 2024-11-12 20:56:50.472 [INFO][5845] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" HandleID="k8s-pod-network.6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Workload="ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0" Nov 12 20:56:50.479707 containerd[1782]: 2024-11-12 20:56:50.472 [INFO][5845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:50.479707 containerd[1782]: 2024-11-12 20:56:50.472 [INFO][5845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:50.479707 containerd[1782]: 2024-11-12 20:56:50.476 [WARNING][5845] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" HandleID="k8s-pod-network.6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Workload="ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0" Nov 12 20:56:50.479707 containerd[1782]: 2024-11-12 20:56:50.476 [INFO][5845] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" HandleID="k8s-pod-network.6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Workload="ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0" Nov 12 20:56:50.479707 containerd[1782]: 2024-11-12 20:56:50.477 [INFO][5845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:50.479707 containerd[1782]: 2024-11-12 20:56:50.478 [INFO][5839] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Nov 12 20:56:50.480458 containerd[1782]: time="2024-11-12T20:56:50.479719016Z" level=info msg="TearDown network for sandbox \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\" successfully" Nov 12 20:56:50.480458 containerd[1782]: time="2024-11-12T20:56:50.479746416Z" level=info msg="StopPodSandbox for \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\" returns successfully" Nov 12 20:56:50.480458 containerd[1782]: time="2024-11-12T20:56:50.480395022Z" level=info msg="RemovePodSandbox for \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\"" Nov 12 20:56:50.480458 containerd[1782]: time="2024-11-12T20:56:50.480430423Z" level=info msg="Forcibly stopping sandbox \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\"" Nov 12 20:56:50.538844 containerd[1782]: 2024-11-12 20:56:50.511 [WARNING][5863] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6ef4534f-dec6-4d07-bd02-f445b758fa12", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"3c717fbbd4cf302d8a0982b68e1a2324f3bbea39b412e21a1cd940d14ce29584", Pod:"csi-node-driver-6rmfb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.0.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie1f698a26ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:50.538844 containerd[1782]: 2024-11-12 20:56:50.511 [INFO][5863] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Nov 12 20:56:50.538844 containerd[1782]: 2024-11-12 20:56:50.511 [INFO][5863] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" iface="eth0" netns="" Nov 12 20:56:50.538844 containerd[1782]: 2024-11-12 20:56:50.511 [INFO][5863] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Nov 12 20:56:50.538844 containerd[1782]: 2024-11-12 20:56:50.511 [INFO][5863] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Nov 12 20:56:50.538844 containerd[1782]: 2024-11-12 20:56:50.531 [INFO][5869] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" HandleID="k8s-pod-network.6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Workload="ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0" Nov 12 20:56:50.538844 containerd[1782]: 2024-11-12 20:56:50.531 [INFO][5869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:50.538844 containerd[1782]: 2024-11-12 20:56:50.531 [INFO][5869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:50.538844 containerd[1782]: 2024-11-12 20:56:50.536 [WARNING][5869] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" HandleID="k8s-pod-network.6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Workload="ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0" Nov 12 20:56:50.538844 containerd[1782]: 2024-11-12 20:56:50.536 [INFO][5869] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" HandleID="k8s-pod-network.6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Workload="ci--4081.2.0--a--1543c8d709-k8s-csi--node--driver--6rmfb-eth0" Nov 12 20:56:50.538844 containerd[1782]: 2024-11-12 20:56:50.537 [INFO][5869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:50.538844 containerd[1782]: 2024-11-12 20:56:50.537 [INFO][5863] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1" Nov 12 20:56:50.539634 containerd[1782]: time="2024-11-12T20:56:50.538872065Z" level=info msg="TearDown network for sandbox \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\" successfully" Nov 12 20:56:50.547967 containerd[1782]: time="2024-11-12T20:56:50.547914049Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:56:50.548067 containerd[1782]: time="2024-11-12T20:56:50.547995250Z" level=info msg="RemovePodSandbox \"6812e0f1750cbb58361c0511497bf291e1fa3c09e5ae34eb2ac0be1801fb3be1\" returns successfully" Nov 12 20:56:50.548704 containerd[1782]: time="2024-11-12T20:56:50.548672056Z" level=info msg="StopPodSandbox for \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\"" Nov 12 20:56:50.613196 containerd[1782]: 2024-11-12 20:56:50.584 [WARNING][5887] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e4125a79-bb3b-439b-8dfa-c76cc22a17a7", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb", Pod:"coredns-76f75df574-p948p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif50c64650df", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:50.613196 containerd[1782]: 2024-11-12 20:56:50.584 [INFO][5887] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Nov 12 20:56:50.613196 containerd[1782]: 2024-11-12 20:56:50.584 [INFO][5887] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" iface="eth0" netns="" Nov 12 20:56:50.613196 containerd[1782]: 2024-11-12 20:56:50.584 [INFO][5887] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Nov 12 20:56:50.613196 containerd[1782]: 2024-11-12 20:56:50.584 [INFO][5887] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Nov 12 20:56:50.613196 containerd[1782]: 2024-11-12 20:56:50.602 [INFO][5893] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" HandleID="k8s-pod-network.a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0" Nov 12 20:56:50.613196 containerd[1782]: 2024-11-12 20:56:50.602 [INFO][5893] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 20:56:50.613196 containerd[1782]: 2024-11-12 20:56:50.602 [INFO][5893] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:50.613196 containerd[1782]: 2024-11-12 20:56:50.607 [WARNING][5893] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" HandleID="k8s-pod-network.a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0" Nov 12 20:56:50.613196 containerd[1782]: 2024-11-12 20:56:50.607 [INFO][5893] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" HandleID="k8s-pod-network.a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0" Nov 12 20:56:50.613196 containerd[1782]: 2024-11-12 20:56:50.609 [INFO][5893] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:50.613196 containerd[1782]: 2024-11-12 20:56:50.611 [INFO][5887] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Nov 12 20:56:50.613196 containerd[1782]: time="2024-11-12T20:56:50.613136254Z" level=info msg="TearDown network for sandbox \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\" successfully" Nov 12 20:56:50.613196 containerd[1782]: time="2024-11-12T20:56:50.613187755Z" level=info msg="StopPodSandbox for \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\" returns successfully" Nov 12 20:56:50.613913 containerd[1782]: time="2024-11-12T20:56:50.613747960Z" level=info msg="RemovePodSandbox for \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\"" Nov 12 20:56:50.613913 containerd[1782]: time="2024-11-12T20:56:50.613778460Z" level=info msg="Forcibly stopping sandbox \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\"" Nov 12 20:56:50.681376 containerd[1782]: 2024-11-12 20:56:50.647 [WARNING][5911] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e4125a79-bb3b-439b-8dfa-c76cc22a17a7", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"3c5c3b1d785bc04c028821f0ce2f2664c2467e70c2f1c423bbed560c92fcd0cb", Pod:"coredns-76f75df574-p948p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif50c64650df", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:50.681376 containerd[1782]: 2024-11-12 20:56:50.647 [INFO][5911] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Nov 12 20:56:50.681376 containerd[1782]: 2024-11-12 20:56:50.647 [INFO][5911] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" iface="eth0" netns="" Nov 12 20:56:50.681376 containerd[1782]: 2024-11-12 20:56:50.647 [INFO][5911] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Nov 12 20:56:50.681376 containerd[1782]: 2024-11-12 20:56:50.647 [INFO][5911] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Nov 12 20:56:50.681376 containerd[1782]: 2024-11-12 20:56:50.668 [INFO][5918] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" HandleID="k8s-pod-network.a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0" Nov 12 20:56:50.681376 containerd[1782]: 2024-11-12 20:56:50.668 [INFO][5918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:50.681376 containerd[1782]: 2024-11-12 20:56:50.668 [INFO][5918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:50.681376 containerd[1782]: 2024-11-12 20:56:50.676 [WARNING][5918] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" HandleID="k8s-pod-network.a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0" Nov 12 20:56:50.681376 containerd[1782]: 2024-11-12 20:56:50.676 [INFO][5918] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" HandleID="k8s-pod-network.a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Workload="ci--4081.2.0--a--1543c8d709-k8s-coredns--76f75df574--p948p-eth0" Nov 12 20:56:50.681376 containerd[1782]: 2024-11-12 20:56:50.677 [INFO][5918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:50.681376 containerd[1782]: 2024-11-12 20:56:50.679 [INFO][5911] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d" Nov 12 20:56:50.681376 containerd[1782]: time="2024-11-12T20:56:50.680994384Z" level=info msg="TearDown network for sandbox \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\" successfully" Nov 12 20:56:50.709917 containerd[1782]: time="2024-11-12T20:56:50.709778652Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:56:50.710207 containerd[1782]: time="2024-11-12T20:56:50.710092254Z" level=info msg="RemovePodSandbox \"a6a3b91729a2217726fb912806d8cb8a1c2d5ea15e4fef153a3f77f4b33b692d\" returns successfully" Nov 12 20:56:50.711239 containerd[1782]: time="2024-11-12T20:56:50.711151464Z" level=info msg="StopPodSandbox for \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\"" Nov 12 20:56:50.723714 kubelet[3479]: I1112 20:56:50.720633 3479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:56:50.759596 kubelet[3479]: I1112 20:56:50.759460 3479 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-6rmfb" podStartSLOduration=31.57644845 podStartE2EDuration="37.759381512s" podCreationTimestamp="2024-11-12 20:56:13 +0000 UTC" firstStartedPulling="2024-11-12 20:56:41.505976048 +0000 UTC m=+51.603133920" lastFinishedPulling="2024-11-12 20:56:47.68890911 +0000 UTC m=+57.786066982" observedRunningTime="2024-11-12 20:56:48.331892179 +0000 UTC m=+58.429050051" watchObservedRunningTime="2024-11-12 20:56:50.759381512 +0000 UTC m=+60.856539384" Nov 12 20:56:50.834823 containerd[1782]: 2024-11-12 20:56:50.774 [WARNING][5936] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0", GenerateName:"calico-kube-controllers-7cc8897bfb-", Namespace:"calico-system", SelfLink:"", UID:"618b1aa4-6bea-46a5-a0d9-a90b9001122c", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cc8897bfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066", Pod:"calico-kube-controllers-7cc8897bfb-42hq9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3f8337c9997", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:50.834823 containerd[1782]: 2024-11-12 20:56:50.776 [INFO][5936] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Nov 12 20:56:50.834823 containerd[1782]: 2024-11-12 20:56:50.776 [INFO][5936] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" iface="eth0" netns="" Nov 12 20:56:50.834823 containerd[1782]: 2024-11-12 20:56:50.776 [INFO][5936] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Nov 12 20:56:50.834823 containerd[1782]: 2024-11-12 20:56:50.776 [INFO][5936] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Nov 12 20:56:50.834823 containerd[1782]: 2024-11-12 20:56:50.819 [INFO][5943] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" HandleID="k8s-pod-network.5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0" Nov 12 20:56:50.834823 containerd[1782]: 2024-11-12 20:56:50.819 [INFO][5943] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:50.834823 containerd[1782]: 2024-11-12 20:56:50.819 [INFO][5943] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:50.834823 containerd[1782]: 2024-11-12 20:56:50.830 [WARNING][5943] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" HandleID="k8s-pod-network.5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0" Nov 12 20:56:50.834823 containerd[1782]: 2024-11-12 20:56:50.830 [INFO][5943] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" HandleID="k8s-pod-network.5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0" Nov 12 20:56:50.834823 containerd[1782]: 2024-11-12 20:56:50.831 [INFO][5943] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:50.834823 containerd[1782]: 2024-11-12 20:56:50.833 [INFO][5936] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Nov 12 20:56:50.836014 containerd[1782]: time="2024-11-12T20:56:50.835263216Z" level=info msg="TearDown network for sandbox \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\" successfully" Nov 12 20:56:50.836014 containerd[1782]: time="2024-11-12T20:56:50.835297217Z" level=info msg="StopPodSandbox for \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\" returns successfully" Nov 12 20:56:50.836122 containerd[1782]: time="2024-11-12T20:56:50.836066024Z" level=info msg="RemovePodSandbox for \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\"" Nov 12 20:56:50.836122 containerd[1782]: time="2024-11-12T20:56:50.836102324Z" level=info msg="Forcibly stopping sandbox \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\"" Nov 12 20:56:50.919087 containerd[1782]: 2024-11-12 20:56:50.878 [WARNING][5963] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0", GenerateName:"calico-kube-controllers-7cc8897bfb-", Namespace:"calico-system", SelfLink:"", UID:"618b1aa4-6bea-46a5-a0d9-a90b9001122c", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cc8897bfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"d7e3f723c490438f8cbac6f8fd17aebd228778fd5d1fd1087f1aea4190c76066", Pod:"calico-kube-controllers-7cc8897bfb-42hq9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3f8337c9997", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:50.919087 containerd[1782]: 2024-11-12 20:56:50.878 [INFO][5963] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Nov 12 20:56:50.919087 containerd[1782]: 2024-11-12 20:56:50.878 [INFO][5963] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" iface="eth0" netns="" Nov 12 20:56:50.919087 containerd[1782]: 2024-11-12 20:56:50.880 [INFO][5963] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Nov 12 20:56:50.919087 containerd[1782]: 2024-11-12 20:56:50.880 [INFO][5963] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Nov 12 20:56:50.919087 containerd[1782]: 2024-11-12 20:56:50.910 [INFO][5970] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" HandleID="k8s-pod-network.5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0" Nov 12 20:56:50.919087 containerd[1782]: 2024-11-12 20:56:50.910 [INFO][5970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:50.919087 containerd[1782]: 2024-11-12 20:56:50.910 [INFO][5970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:50.919087 containerd[1782]: 2024-11-12 20:56:50.915 [WARNING][5970] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" HandleID="k8s-pod-network.5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0" Nov 12 20:56:50.919087 containerd[1782]: 2024-11-12 20:56:50.915 [INFO][5970] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" HandleID="k8s-pod-network.5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--kube--controllers--7cc8897bfb--42hq9-eth0" Nov 12 20:56:50.919087 containerd[1782]: 2024-11-12 20:56:50.917 [INFO][5970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:50.919087 containerd[1782]: 2024-11-12 20:56:50.918 [INFO][5963] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7" Nov 12 20:56:50.919087 containerd[1782]: time="2024-11-12T20:56:50.919031994Z" level=info msg="TearDown network for sandbox \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\" successfully" Nov 12 20:56:50.929515 containerd[1782]: time="2024-11-12T20:56:50.929413290Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:56:50.929978 containerd[1782]: time="2024-11-12T20:56:50.929574792Z" level=info msg="RemovePodSandbox \"5dcea36c2f30d9a0c1af35ba449a710b0fbd8126496079f7ef292240286e21a7\" returns successfully" Nov 12 20:56:50.930495 containerd[1782]: time="2024-11-12T20:56:50.930467700Z" level=info msg="StopPodSandbox for \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\"" Nov 12 20:56:50.989405 containerd[1782]: 2024-11-12 20:56:50.962 [WARNING][5988] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0", GenerateName:"calico-apiserver-7774cd9f88-", Namespace:"calico-apiserver", SelfLink:"", UID:"54c74a88-f218-4ecd-bff2-da8a0009d8be", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7774cd9f88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4", Pod:"calico-apiserver-7774cd9f88-p4f2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d6f6b00ea1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:50.989405 containerd[1782]: 2024-11-12 20:56:50.962 [INFO][5988] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Nov 12 20:56:50.989405 containerd[1782]: 2024-11-12 20:56:50.962 [INFO][5988] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" iface="eth0" netns="" Nov 12 20:56:50.989405 containerd[1782]: 2024-11-12 20:56:50.962 [INFO][5988] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Nov 12 20:56:50.989405 containerd[1782]: 2024-11-12 20:56:50.962 [INFO][5988] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Nov 12 20:56:50.989405 containerd[1782]: 2024-11-12 20:56:50.980 [INFO][5994] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" HandleID="k8s-pod-network.e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0" Nov 12 20:56:50.989405 containerd[1782]: 2024-11-12 20:56:50.980 [INFO][5994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:50.989405 containerd[1782]: 2024-11-12 20:56:50.980 [INFO][5994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:50.989405 containerd[1782]: 2024-11-12 20:56:50.985 [WARNING][5994] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" HandleID="k8s-pod-network.e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0" Nov 12 20:56:50.989405 containerd[1782]: 2024-11-12 20:56:50.986 [INFO][5994] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" HandleID="k8s-pod-network.e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0" Nov 12 20:56:50.989405 containerd[1782]: 2024-11-12 20:56:50.987 [INFO][5994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:50.989405 containerd[1782]: 2024-11-12 20:56:50.988 [INFO][5988] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Nov 12 20:56:50.990324 containerd[1782]: time="2024-11-12T20:56:50.989450448Z" level=info msg="TearDown network for sandbox \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\" successfully" Nov 12 20:56:50.990324 containerd[1782]: time="2024-11-12T20:56:50.989478948Z" level=info msg="StopPodSandbox for \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\" returns successfully" Nov 12 20:56:50.990324 containerd[1782]: time="2024-11-12T20:56:50.990208655Z" level=info msg="RemovePodSandbox for \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\"" Nov 12 20:56:50.990324 containerd[1782]: time="2024-11-12T20:56:50.990243355Z" level=info msg="Forcibly stopping sandbox \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\"" Nov 12 20:56:51.052831 containerd[1782]: 2024-11-12 20:56:51.024 [WARNING][6012] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0", GenerateName:"calico-apiserver-7774cd9f88-", Namespace:"calico-apiserver", SelfLink:"", UID:"54c74a88-f218-4ecd-bff2-da8a0009d8be", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7774cd9f88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-1543c8d709", ContainerID:"9728fb81e4072efd0405efc23e07d834a3442056e2344d303357fd53fbc4dbe4", Pod:"calico-apiserver-7774cd9f88-p4f2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d6f6b00ea1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:51.052831 containerd[1782]: 2024-11-12 20:56:51.024 [INFO][6012] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Nov 12 20:56:51.052831 containerd[1782]: 2024-11-12 20:56:51.025 [INFO][6012] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" iface="eth0" netns="" Nov 12 20:56:51.052831 containerd[1782]: 2024-11-12 20:56:51.025 [INFO][6012] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Nov 12 20:56:51.052831 containerd[1782]: 2024-11-12 20:56:51.025 [INFO][6012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Nov 12 20:56:51.052831 containerd[1782]: 2024-11-12 20:56:51.044 [INFO][6019] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" HandleID="k8s-pod-network.e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0" Nov 12 20:56:51.052831 containerd[1782]: 2024-11-12 20:56:51.044 [INFO][6019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:51.052831 containerd[1782]: 2024-11-12 20:56:51.044 [INFO][6019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:51.052831 containerd[1782]: 2024-11-12 20:56:51.049 [WARNING][6019] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" HandleID="k8s-pod-network.e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0" Nov 12 20:56:51.052831 containerd[1782]: 2024-11-12 20:56:51.049 [INFO][6019] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" HandleID="k8s-pod-network.e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Workload="ci--4081.2.0--a--1543c8d709-k8s-calico--apiserver--7774cd9f88--p4f2v-eth0" Nov 12 20:56:51.052831 containerd[1782]: 2024-11-12 20:56:51.051 [INFO][6019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:51.052831 containerd[1782]: 2024-11-12 20:56:51.051 [INFO][6012] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f" Nov 12 20:56:51.054014 containerd[1782]: time="2024-11-12T20:56:51.052884336Z" level=info msg="TearDown network for sandbox \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\" successfully" Nov 12 20:56:51.064671 containerd[1782]: time="2024-11-12T20:56:51.064623545Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:56:51.064804 containerd[1782]: time="2024-11-12T20:56:51.064705046Z" level=info msg="RemovePodSandbox \"e4ac6c754d1eed588b1fd46a4fa62e21226e93e00b6f525ae24116e3b8e53f5f\" returns successfully" Nov 12 20:56:55.099756 systemd[1]: run-containerd-runc-k8s.io-2756616f03226091699d90f23af0d3c3064e762079b7124573857521713cddc0-runc.QFe3Af.mount: Deactivated successfully. 
Nov 12 20:57:20.623512 systemd[1]: Started sshd@7-10.200.8.44:22-10.200.16.10:37344.service - OpenSSH per-connection server daemon (10.200.16.10:37344). Nov 12 20:57:21.249415 sshd[6089]: Accepted publickey for core from 10.200.16.10 port 37344 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:57:21.251066 sshd[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:21.255397 systemd-logind[1761]: New session 10 of user core. Nov 12 20:57:21.261403 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 20:57:21.754961 sshd[6089]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:21.759603 systemd[1]: sshd@7-10.200.8.44:22-10.200.16.10:37344.service: Deactivated successfully. Nov 12 20:57:21.763706 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 20:57:21.764649 systemd-logind[1761]: Session 10 logged out. Waiting for processes to exit. Nov 12 20:57:21.765639 systemd-logind[1761]: Removed session 10. Nov 12 20:57:26.862580 systemd[1]: Started sshd@8-10.200.8.44:22-10.200.16.10:37356.service - OpenSSH per-connection server daemon (10.200.16.10:37356). Nov 12 20:57:27.484738 sshd[6125]: Accepted publickey for core from 10.200.16.10 port 37356 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:57:27.486192 sshd[6125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:27.490137 systemd-logind[1761]: New session 11 of user core. Nov 12 20:57:27.493564 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 20:57:27.986342 sshd[6125]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:27.991569 systemd[1]: sshd@8-10.200.8.44:22-10.200.16.10:37356.service: Deactivated successfully. Nov 12 20:57:27.995664 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 20:57:27.996537 systemd-logind[1761]: Session 11 logged out. Waiting for processes to exit. 
Nov 12 20:57:27.997528 systemd-logind[1761]: Removed session 11.
Nov 12 20:57:33.093462 systemd[1]: Started sshd@9-10.200.8.44:22-10.200.16.10:45884.service - OpenSSH per-connection server daemon (10.200.16.10:45884).
Nov 12 20:57:33.711305 sshd[6140]: Accepted publickey for core from 10.200.16.10 port 45884 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:57:33.713108 sshd[6140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:33.718684 systemd-logind[1761]: New session 12 of user core.
Nov 12 20:57:33.722447 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 12 20:57:34.216245 sshd[6140]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:34.219453 systemd[1]: sshd@9-10.200.8.44:22-10.200.16.10:45884.service: Deactivated successfully.
Nov 12 20:57:34.223941 systemd-logind[1761]: Session 12 logged out. Waiting for processes to exit.
Nov 12 20:57:34.225996 systemd[1]: session-12.scope: Deactivated successfully.
Nov 12 20:57:34.227075 systemd-logind[1761]: Removed session 12.
Nov 12 20:57:39.322878 systemd[1]: Started sshd@10-10.200.8.44:22-10.200.16.10:49884.service - OpenSSH per-connection server daemon (10.200.16.10:49884).
Nov 12 20:57:39.949600 sshd[6157]: Accepted publickey for core from 10.200.16.10 port 49884 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:57:39.950975 sshd[6157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:39.955047 systemd-logind[1761]: New session 13 of user core.
Nov 12 20:57:39.961668 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 12 20:57:40.446677 sshd[6157]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:40.452660 systemd[1]: sshd@10-10.200.8.44:22-10.200.16.10:49884.service: Deactivated successfully.
Nov 12 20:57:40.457221 systemd[1]: session-13.scope: Deactivated successfully.
Nov 12 20:57:40.458190 systemd-logind[1761]: Session 13 logged out. Waiting for processes to exit.
Nov 12 20:57:40.459116 systemd-logind[1761]: Removed session 13.
Nov 12 20:57:40.557457 systemd[1]: Started sshd@11-10.200.8.44:22-10.200.16.10:49898.service - OpenSSH per-connection server daemon (10.200.16.10:49898).
Nov 12 20:57:41.175211 sshd[6172]: Accepted publickey for core from 10.200.16.10 port 49898 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:57:41.176949 sshd[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:41.182635 systemd-logind[1761]: New session 14 of user core.
Nov 12 20:57:41.189639 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 12 20:57:41.708229 sshd[6172]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:41.713149 systemd[1]: sshd@11-10.200.8.44:22-10.200.16.10:49898.service: Deactivated successfully.
Nov 12 20:57:41.717798 systemd[1]: session-14.scope: Deactivated successfully.
Nov 12 20:57:41.718814 systemd-logind[1761]: Session 14 logged out. Waiting for processes to exit.
Nov 12 20:57:41.719843 systemd-logind[1761]: Removed session 14.
Nov 12 20:57:41.815469 systemd[1]: Started sshd@12-10.200.8.44:22-10.200.16.10:49902.service - OpenSSH per-connection server daemon (10.200.16.10:49902).
Nov 12 20:57:42.436086 sshd[6205]: Accepted publickey for core from 10.200.16.10 port 49902 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:57:42.439634 sshd[6205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:42.444316 systemd-logind[1761]: New session 15 of user core.
Nov 12 20:57:42.449460 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 12 20:57:42.937227 sshd[6205]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:42.941021 systemd[1]: sshd@12-10.200.8.44:22-10.200.16.10:49902.service: Deactivated successfully.
Nov 12 20:57:42.947146 systemd-logind[1761]: Session 15 logged out. Waiting for processes to exit.
Nov 12 20:57:42.947627 systemd[1]: session-15.scope: Deactivated successfully.
Nov 12 20:57:42.948962 systemd-logind[1761]: Removed session 15.
Nov 12 20:57:48.043616 systemd[1]: Started sshd@13-10.200.8.44:22-10.200.16.10:49918.service - OpenSSH per-connection server daemon (10.200.16.10:49918).
Nov 12 20:57:48.663139 sshd[6242]: Accepted publickey for core from 10.200.16.10 port 49918 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:57:48.664731 sshd[6242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:48.668912 systemd-logind[1761]: New session 16 of user core.
Nov 12 20:57:48.675855 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 12 20:57:49.163711 sshd[6242]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:49.166825 systemd[1]: sshd@13-10.200.8.44:22-10.200.16.10:49918.service: Deactivated successfully.
Nov 12 20:57:49.172733 systemd[1]: session-16.scope: Deactivated successfully.
Nov 12 20:57:49.173608 systemd-logind[1761]: Session 16 logged out. Waiting for processes to exit.
Nov 12 20:57:49.174539 systemd-logind[1761]: Removed session 16.
Nov 12 20:57:54.275847 systemd[1]: Started sshd@14-10.200.8.44:22-10.200.16.10:37242.service - OpenSSH per-connection server daemon (10.200.16.10:37242).
Nov 12 20:57:54.893110 sshd[6258]: Accepted publickey for core from 10.200.16.10 port 37242 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:57:54.895358 sshd[6258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:54.900976 systemd-logind[1761]: New session 17 of user core.
Nov 12 20:57:54.904408 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 12 20:57:55.394069 sshd[6258]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:55.397400 systemd[1]: sshd@14-10.200.8.44:22-10.200.16.10:37242.service: Deactivated successfully.
Nov 12 20:57:55.402924 systemd[1]: session-17.scope: Deactivated successfully.
Nov 12 20:57:55.404117 systemd-logind[1761]: Session 17 logged out. Waiting for processes to exit.
Nov 12 20:57:55.405231 systemd-logind[1761]: Removed session 17.
Nov 12 20:58:00.504939 systemd[1]: Started sshd@15-10.200.8.44:22-10.200.16.10:40002.service - OpenSSH per-connection server daemon (10.200.16.10:40002).
Nov 12 20:58:01.134890 sshd[6299]: Accepted publickey for core from 10.200.16.10 port 40002 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:58:01.136807 sshd[6299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:01.141210 systemd-logind[1761]: New session 18 of user core.
Nov 12 20:58:01.148593 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 12 20:58:01.645904 sshd[6299]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:01.648801 systemd[1]: sshd@15-10.200.8.44:22-10.200.16.10:40002.service: Deactivated successfully.
Nov 12 20:58:01.654525 systemd[1]: session-18.scope: Deactivated successfully.
Nov 12 20:58:01.655368 systemd-logind[1761]: Session 18 logged out. Waiting for processes to exit.
Nov 12 20:58:01.656286 systemd-logind[1761]: Removed session 18.
Nov 12 20:58:01.755444 systemd[1]: Started sshd@16-10.200.8.44:22-10.200.16.10:40008.service - OpenSSH per-connection server daemon (10.200.16.10:40008).
Nov 12 20:58:02.389689 sshd[6312]: Accepted publickey for core from 10.200.16.10 port 40008 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:58:02.391400 sshd[6312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:02.396646 systemd-logind[1761]: New session 19 of user core.
Nov 12 20:58:02.401421 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 12 20:58:02.972150 sshd[6312]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:02.974931 systemd[1]: sshd@16-10.200.8.44:22-10.200.16.10:40008.service: Deactivated successfully.
Nov 12 20:58:02.980516 systemd[1]: session-19.scope: Deactivated successfully.
Nov 12 20:58:02.980565 systemd-logind[1761]: Session 19 logged out. Waiting for processes to exit.
Nov 12 20:58:02.982633 systemd-logind[1761]: Removed session 19.
Nov 12 20:58:03.079452 systemd[1]: Started sshd@17-10.200.8.44:22-10.200.16.10:40014.service - OpenSSH per-connection server daemon (10.200.16.10:40014).
Nov 12 20:58:03.835150 sshd[6323]: Accepted publickey for core from 10.200.16.10 port 40014 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:58:03.836779 sshd[6323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:03.841893 systemd-logind[1761]: New session 20 of user core.
Nov 12 20:58:03.848525 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 12 20:58:06.060821 sshd[6323]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:06.065798 systemd[1]: sshd@17-10.200.8.44:22-10.200.16.10:40014.service: Deactivated successfully.
Nov 12 20:58:06.069984 systemd[1]: session-20.scope: Deactivated successfully.
Nov 12 20:58:06.070777 systemd-logind[1761]: Session 20 logged out. Waiting for processes to exit.
Nov 12 20:58:06.071757 systemd-logind[1761]: Removed session 20.
Nov 12 20:58:06.166567 systemd[1]: Started sshd@18-10.200.8.44:22-10.200.16.10:40026.service - OpenSSH per-connection server daemon (10.200.16.10:40026).
Nov 12 20:58:06.787592 sshd[6346]: Accepted publickey for core from 10.200.16.10 port 40026 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:58:06.789394 sshd[6346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:06.794988 systemd-logind[1761]: New session 21 of user core.
Nov 12 20:58:06.799444 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 12 20:58:07.382571 sshd[6346]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:07.386307 systemd[1]: sshd@18-10.200.8.44:22-10.200.16.10:40026.service: Deactivated successfully.
Nov 12 20:58:07.392942 systemd[1]: session-21.scope: Deactivated successfully.
Nov 12 20:58:07.393907 systemd-logind[1761]: Session 21 logged out. Waiting for processes to exit.
Nov 12 20:58:07.394949 systemd-logind[1761]: Removed session 21.
Nov 12 20:58:07.488442 systemd[1]: Started sshd@19-10.200.8.44:22-10.200.16.10:40036.service - OpenSSH per-connection server daemon (10.200.16.10:40036).
Nov 12 20:58:08.108629 sshd[6358]: Accepted publickey for core from 10.200.16.10 port 40036 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:58:08.110360 sshd[6358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:08.115119 systemd-logind[1761]: New session 22 of user core.
Nov 12 20:58:08.123722 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 12 20:58:08.604924 sshd[6358]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:08.609567 systemd[1]: sshd@19-10.200.8.44:22-10.200.16.10:40036.service: Deactivated successfully.
Nov 12 20:58:08.614692 systemd[1]: session-22.scope: Deactivated successfully.
Nov 12 20:58:08.615568 systemd-logind[1761]: Session 22 logged out. Waiting for processes to exit.
Nov 12 20:58:08.616561 systemd-logind[1761]: Removed session 22.
Nov 12 20:58:13.713512 systemd[1]: Started sshd@20-10.200.8.44:22-10.200.16.10:33138.service - OpenSSH per-connection server daemon (10.200.16.10:33138).
Nov 12 20:58:14.342780 sshd[6412]: Accepted publickey for core from 10.200.16.10 port 33138 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:58:14.344346 sshd[6412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:14.348887 systemd-logind[1761]: New session 23 of user core.
Nov 12 20:58:14.352465 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 12 20:58:14.843820 sshd[6412]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:14.847120 systemd[1]: sshd@20-10.200.8.44:22-10.200.16.10:33138.service: Deactivated successfully.
Nov 12 20:58:14.852179 systemd[1]: session-23.scope: Deactivated successfully.
Nov 12 20:58:14.853203 systemd-logind[1761]: Session 23 logged out. Waiting for processes to exit.
Nov 12 20:58:14.854597 systemd-logind[1761]: Removed session 23.
Nov 12 20:58:19.952454 systemd[1]: Started sshd@21-10.200.8.44:22-10.200.16.10:34918.service - OpenSSH per-connection server daemon (10.200.16.10:34918).
Nov 12 20:58:20.572059 sshd[6425]: Accepted publickey for core from 10.200.16.10 port 34918 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:58:20.574711 sshd[6425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:20.585227 systemd-logind[1761]: New session 24 of user core.
Nov 12 20:58:20.591122 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 12 20:58:21.069292 sshd[6425]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:21.074203 systemd[1]: sshd@21-10.200.8.44:22-10.200.16.10:34918.service: Deactivated successfully.
Nov 12 20:58:21.078750 systemd[1]: session-24.scope: Deactivated successfully.
Nov 12 20:58:21.079565 systemd-logind[1761]: Session 24 logged out. Waiting for processes to exit.
Nov 12 20:58:21.080602 systemd-logind[1761]: Removed session 24.
Nov 12 20:58:26.179928 systemd[1]: Started sshd@22-10.200.8.44:22-10.200.16.10:34922.service - OpenSSH per-connection server daemon (10.200.16.10:34922).
Nov 12 20:58:26.798036 sshd[6456]: Accepted publickey for core from 10.200.16.10 port 34922 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:58:26.800473 sshd[6456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:26.805400 systemd-logind[1761]: New session 25 of user core.
Nov 12 20:58:26.809405 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 12 20:58:27.301307 sshd[6456]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:27.304433 systemd[1]: sshd@22-10.200.8.44:22-10.200.16.10:34922.service: Deactivated successfully.
Nov 12 20:58:27.309918 systemd[1]: session-25.scope: Deactivated successfully.
Nov 12 20:58:27.311054 systemd-logind[1761]: Session 25 logged out. Waiting for processes to exit.
Nov 12 20:58:27.312048 systemd-logind[1761]: Removed session 25.
Nov 12 20:58:32.409694 systemd[1]: Started sshd@23-10.200.8.44:22-10.200.16.10:39966.service - OpenSSH per-connection server daemon (10.200.16.10:39966).
Nov 12 20:58:33.036401 sshd[6470]: Accepted publickey for core from 10.200.16.10 port 39966 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:58:33.038112 sshd[6470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:33.043558 systemd-logind[1761]: New session 26 of user core.
Nov 12 20:58:33.045769 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 12 20:58:33.531391 sshd[6470]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:33.535452 systemd[1]: sshd@23-10.200.8.44:22-10.200.16.10:39966.service: Deactivated successfully.
Nov 12 20:58:33.539898 systemd[1]: session-26.scope: Deactivated successfully.
Nov 12 20:58:33.540815 systemd-logind[1761]: Session 26 logged out. Waiting for processes to exit.
Nov 12 20:58:33.541836 systemd-logind[1761]: Removed session 26.
Nov 12 20:58:38.639750 systemd[1]: Started sshd@24-10.200.8.44:22-10.200.16.10:47946.service - OpenSSH per-connection server daemon (10.200.16.10:47946).
Nov 12 20:58:39.257045 sshd[6486]: Accepted publickey for core from 10.200.16.10 port 47946 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:58:39.258479 sshd[6486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:39.263369 systemd-logind[1761]: New session 27 of user core.
Nov 12 20:58:39.270792 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 12 20:58:39.755130 sshd[6486]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:39.758361 systemd[1]: sshd@24-10.200.8.44:22-10.200.16.10:47946.service: Deactivated successfully.
Nov 12 20:58:39.763255 systemd-logind[1761]: Session 27 logged out. Waiting for processes to exit.
Nov 12 20:58:39.763978 systemd[1]: session-27.scope: Deactivated successfully.
Nov 12 20:58:39.765299 systemd-logind[1761]: Removed session 27.