Jul 7 00:09:26.013689 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 7 00:09:26.013705 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 00:09:26.013712 kernel: BIOS-provided physical RAM map:
Jul 7 00:09:26.013716 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Jul 7 00:09:26.013720 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Jul 7 00:09:26.013724 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jul 7 00:09:26.013729 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jul 7 00:09:26.013733 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jul 7 00:09:26.013737 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081a4efff] usable
Jul 7 00:09:26.013741 kernel: BIOS-e820: [mem 0x0000000081a4f000-0x0000000081a4ffff] ACPI NVS
Jul 7 00:09:26.013745 kernel: BIOS-e820: [mem 0x0000000081a50000-0x0000000081a50fff] reserved
Jul 7 00:09:26.013750 kernel: BIOS-e820: [mem 0x0000000081a51000-0x000000008afcdfff] usable
Jul 7 00:09:26.013755 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved
Jul 7 00:09:26.013759 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable
Jul 7 00:09:26.013764 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS
Jul 7 00:09:26.013769 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved
Jul 7 00:09:26.013775 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Jul 7 00:09:26.013779 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Jul 7 00:09:26.013784 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 7 00:09:26.013789 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jul 7 00:09:26.013793 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jul 7 00:09:26.013798 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jul 7 00:09:26.013803 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jul 7 00:09:26.013807 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Jul 7 00:09:26.013812 kernel: NX (Execute Disable) protection: active
Jul 7 00:09:26.013816 kernel: APIC: Static calls initialized
Jul 7 00:09:26.013821 kernel: SMBIOS 3.2.1 present.
Jul 7 00:09:26.013826 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 2.6 12/03/2024
Jul 7 00:09:26.013832 kernel: tsc: Detected 3400.000 MHz processor
Jul 7 00:09:26.013836 kernel: tsc: Detected 3399.906 MHz TSC
Jul 7 00:09:26.013841 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 00:09:26.013846 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 00:09:26.013851 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Jul 7 00:09:26.013856 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Jul 7 00:09:26.013861 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 00:09:26.013866 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Jul 7 00:09:26.013870 kernel: Using GB pages for direct mapping
Jul 7 00:09:26.013876 kernel: ACPI: Early table checksum verification disabled
Jul 7 00:09:26.013881 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jul 7 00:09:26.013886 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jul 7 00:09:26.013893 kernel: ACPI: FACP 0x000000008C58B670 000114 (v06 01072009 AMI 00010013)
Jul 7 00:09:26.013898 kernel: ACPI: DSDT 0x000000008C54F268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jul 7 00:09:26.013903 kernel: ACPI: FACS 0x000000008C66DF80 000040
Jul 7 00:09:26.013908 kernel: ACPI: APIC 0x000000008C58B788 00012C (v04 01072009 AMI 00010013)
Jul 7 00:09:26.013914 kernel: ACPI: FPDT 0x000000008C58B8B8 000044 (v01 01072009 AMI 00010013)
Jul 7 00:09:26.013919 kernel: ACPI: FIDT 0x000000008C58B900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jul 7 00:09:26.013924 kernel: ACPI: MCFG 0x000000008C58B9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jul 7 00:09:26.013929 kernel: ACPI: SPMI 0x000000008C58B9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jul 7 00:09:26.013934 kernel: ACPI: SSDT 0x000000008C58BA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jul 7 00:09:26.013939 kernel: ACPI: SSDT 0x000000008C58D548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jul 7 00:09:26.013944 kernel: ACPI: SSDT 0x000000008C590710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jul 7 00:09:26.013951 kernel: ACPI: HPET 0x000000008C592A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jul 7 00:09:26.013956 kernel: ACPI: SSDT 0x000000008C592A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jul 7 00:09:26.013961 kernel: ACPI: SSDT 0x000000008C593A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jul 7 00:09:26.013966 kernel: ACPI: UEFI 0x000000008C594320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jul 7 00:09:26.013971 kernel: ACPI: LPIT 0x000000008C594368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jul 7 00:09:26.013976 kernel: ACPI: SSDT 0x000000008C594400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jul 7 00:09:26.013981 kernel: ACPI: SSDT 0x000000008C596BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jul 7 00:09:26.013986 kernel: ACPI: DBGP 0x000000008C5980C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jul 7 00:09:26.013991 kernel: ACPI: DBG2 0x000000008C598100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jul 7 00:09:26.013997 kernel: ACPI: SSDT 0x000000008C598158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jul 7 00:09:26.014002 kernel: ACPI: DMAR 0x000000008C599CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Jul 7 00:09:26.014007 kernel: ACPI: SSDT 0x000000008C599D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jul 7 00:09:26.014012 kernel: ACPI: TPM2 0x000000008C599E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jul 7 00:09:26.014017 kernel: ACPI: SSDT 0x000000008C599EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jul 7 00:09:26.014023 kernel: ACPI: WSMT 0x000000008C59AC40 000028 (v01 SUPERM 01072009 AMI 00010013)
Jul 7 00:09:26.014028 kernel: ACPI: EINJ 0x000000008C59AC68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jul 7 00:09:26.014033 kernel: ACPI: ERST 0x000000008C59AD98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jul 7 00:09:26.014039 kernel: ACPI: BERT 0x000000008C59AFC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jul 7 00:09:26.014044 kernel: ACPI: HEST 0x000000008C59AFF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jul 7 00:09:26.014049 kernel: ACPI: SSDT 0x000000008C59B278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jul 7 00:09:26.014054 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b670-0x8c58b783]
Jul 7 00:09:26.014059 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b66b]
Jul 7 00:09:26.014064 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf]
Jul 7 00:09:26.014069 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b788-0x8c58b8b3]
Jul 7 00:09:26.014074 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b8b8-0x8c58b8fb]
Jul 7 00:09:26.014079 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b900-0x8c58b99b]
Jul 7 00:09:26.014085 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b9a0-0x8c58b9db]
Jul 7 00:09:26.014090 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b9e0-0x8c58ba20]
Jul 7 00:09:26.014095 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58ba28-0x8c58d543]
Jul 7 00:09:26.014100 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d548-0x8c59070d]
Jul 7 00:09:26.014105 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590710-0x8c592a3a]
Jul 7 00:09:26.014110 kernel: ACPI: Reserving HPET table memory at [mem 0x8c592a40-0x8c592a77]
Jul 7 00:09:26.014115 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a78-0x8c593a25]
Jul 7 00:09:26.014120 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593a28-0x8c59431b]
Jul 7 00:09:26.014128 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c594320-0x8c594361]
Jul 7 00:09:26.014134 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c594368-0x8c5943fb]
Jul 7 00:09:26.014139 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594400-0x8c596bdd]
Jul 7 00:09:26.014144 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596be0-0x8c5980c1]
Jul 7 00:09:26.014149 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5980c8-0x8c5980fb]
Jul 7 00:09:26.014154 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598100-0x8c598153]
Jul 7 00:09:26.014159 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598158-0x8c599cbe]
Jul 7 00:09:26.014164 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599cc0-0x8c599d2f]
Jul 7 00:09:26.014169 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599d30-0x8c599e73]
Jul 7 00:09:26.014174 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599e78-0x8c599eab]
Jul 7 00:09:26.014180 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599eb0-0x8c59ac3e]
Jul 7 00:09:26.014185 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59ac40-0x8c59ac67]
Jul 7 00:09:26.014190 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59ac68-0x8c59ad97]
Jul 7 00:09:26.014195 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad98-0x8c59afc7]
Jul 7 00:09:26.014200 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59afc8-0x8c59aff7]
Jul 7 00:09:26.014205 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59aff8-0x8c59b273]
Jul 7 00:09:26.014210 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b278-0x8c59b3d9]
Jul 7 00:09:26.014215 kernel: No NUMA configuration found
Jul 7 00:09:26.014220 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Jul 7 00:09:26.014225 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Jul 7 00:09:26.014231 kernel: Zone ranges:
Jul 7 00:09:26.014237 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 00:09:26.014242 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 7 00:09:26.014247 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Jul 7 00:09:26.014252 kernel: Movable zone start for each node
Jul 7 00:09:26.014257 kernel: Early memory node ranges
Jul 7 00:09:26.014262 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Jul 7 00:09:26.014267 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Jul 7 00:09:26.014272 kernel: node 0: [mem 0x0000000040400000-0x0000000081a4efff]
Jul 7 00:09:26.014278 kernel: node 0: [mem 0x0000000081a51000-0x000000008afcdfff]
Jul 7 00:09:26.014283 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff]
Jul 7 00:09:26.014288 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Jul 7 00:09:26.014294 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Jul 7 00:09:26.014302 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Jul 7 00:09:26.014308 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 00:09:26.014314 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Jul 7 00:09:26.014319 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jul 7 00:09:26.014326 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jul 7 00:09:26.014331 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Jul 7 00:09:26.014336 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges
Jul 7 00:09:26.014342 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Jul 7 00:09:26.014347 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Jul 7 00:09:26.014353 kernel: ACPI: PM-Timer IO Port: 0x1808
Jul 7 00:09:26.014358 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jul 7 00:09:26.014364 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jul 7 00:09:26.014369 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jul 7 00:09:26.014375 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jul 7 00:09:26.014381 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jul 7 00:09:26.014386 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jul 7 00:09:26.014391 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jul 7 00:09:26.014397 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jul 7 00:09:26.014402 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jul 7 00:09:26.014408 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jul 7 00:09:26.014413 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jul 7 00:09:26.014418 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jul 7 00:09:26.014425 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jul 7 00:09:26.014430 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jul 7 00:09:26.014436 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jul 7 00:09:26.014441 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jul 7 00:09:26.014447 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jul 7 00:09:26.014452 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 7 00:09:26.014457 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 00:09:26.014463 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 00:09:26.014468 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 7 00:09:26.014475 kernel: TSC deadline timer available
Jul 7 00:09:26.014480 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jul 7 00:09:26.014485 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Jul 7 00:09:26.014491 kernel: Booting paravirtualized kernel on bare hardware
Jul 7 00:09:26.014496 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 00:09:26.014502 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jul 7 00:09:26.014507 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144
Jul 7 00:09:26.014513 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152
Jul 7 00:09:26.014518 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jul 7 00:09:26.014525 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 00:09:26.014531 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 00:09:26.014536 kernel: random: crng init done
Jul 7 00:09:26.014541 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jul 7 00:09:26.014547 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jul 7 00:09:26.014552 kernel: Fallback order for Node 0: 0
Jul 7 00:09:26.014558 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416
Jul 7 00:09:26.014563 kernel: Policy zone: Normal
Jul 7 00:09:26.014570 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 00:09:26.014575 kernel: software IO TLB: area num 16.
Jul 7 00:09:26.014581 kernel: Memory: 32720316K/33452984K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 732408K reserved, 0K cma-reserved)
Jul 7 00:09:26.014586 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jul 7 00:09:26.014592 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 7 00:09:26.014597 kernel: ftrace: allocated 149 pages with 4 groups
Jul 7 00:09:26.014603 kernel: Dynamic Preempt: voluntary
Jul 7 00:09:26.014608 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 00:09:26.014614 kernel: rcu: RCU event tracing is enabled.
Jul 7 00:09:26.014621 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jul 7 00:09:26.014626 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 00:09:26.014631 kernel: Rude variant of Tasks RCU enabled.
Jul 7 00:09:26.014637 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 00:09:26.014642 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 00:09:26.014648 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jul 7 00:09:26.014653 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Jul 7 00:09:26.014659 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 00:09:26.014664 kernel: Console: colour dummy device 80x25
Jul 7 00:09:26.014670 kernel: printk: console [tty0] enabled
Jul 7 00:09:26.014676 kernel: printk: console [ttyS1] enabled
Jul 7 00:09:26.014681 kernel: ACPI: Core revision 20230628
Jul 7 00:09:26.014687 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Jul 7 00:09:26.014692 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 00:09:26.014698 kernel: DMAR: Host address width 39
Jul 7 00:09:26.014703 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Jul 7 00:09:26.014709 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Jul 7 00:09:26.014714 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff
Jul 7 00:09:26.014720 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Jul 7 00:09:26.014726 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Jul 7 00:09:26.014731 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Jul 7 00:09:26.014737 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Jul 7 00:09:26.014742 kernel: x2apic enabled
Jul 7 00:09:26.014748 kernel: APIC: Switched APIC routing to: cluster x2apic
Jul 7 00:09:26.014753 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Jul 7 00:09:26.014759 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Jul 7 00:09:26.014764 kernel: CPU0: Thermal monitoring enabled (TM1)
Jul 7 00:09:26.014770 kernel: process: using mwait in idle threads
Jul 7 00:09:26.014776 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 7 00:09:26.014781 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 7 00:09:26.014787 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 00:09:26.014792 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jul 7 00:09:26.014798 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jul 7 00:09:26.014803 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jul 7 00:09:26.014809 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jul 7 00:09:26.014814 kernel: RETBleed: Mitigation: Enhanced IBRS
Jul 7 00:09:26.014820 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 7 00:09:26.014825 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 7 00:09:26.014830 kernel: TAA: Mitigation: TSX disabled
Jul 7 00:09:26.014837 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jul 7 00:09:26.014842 kernel: SRBDS: Mitigation: Microcode
Jul 7 00:09:26.014847 kernel: GDS: Mitigation: Microcode
Jul 7 00:09:26.014853 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 7 00:09:26.014858 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 00:09:26.014863 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 00:09:26.014869 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 00:09:26.014874 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jul 7 00:09:26.014880 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jul 7 00:09:26.014885 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 00:09:26.014890 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jul 7 00:09:26.014897 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jul 7 00:09:26.014902 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Jul 7 00:09:26.014908 kernel: Freeing SMP alternatives memory: 32K
Jul 7 00:09:26.014913 kernel: pid_max: default: 32768 minimum: 301
Jul 7 00:09:26.014918 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 7 00:09:26.014924 kernel: landlock: Up and running.
Jul 7 00:09:26.014929 kernel: SELinux: Initializing.
Jul 7 00:09:26.014935 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 00:09:26.014940 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 00:09:26.014945 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jul 7 00:09:26.014951 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jul 7 00:09:26.014957 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jul 7 00:09:26.014963 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jul 7 00:09:26.014968 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Jul 7 00:09:26.014974 kernel: ... version: 4
Jul 7 00:09:26.014979 kernel: ... bit width: 48
Jul 7 00:09:26.014985 kernel: ... generic registers: 4
Jul 7 00:09:26.014990 kernel: ... value mask: 0000ffffffffffff
Jul 7 00:09:26.014995 kernel: ... max period: 00007fffffffffff
Jul 7 00:09:26.015001 kernel: ... fixed-purpose events: 3
Jul 7 00:09:26.015007 kernel: ... event mask: 000000070000000f
Jul 7 00:09:26.015013 kernel: signal: max sigframe size: 2032
Jul 7 00:09:26.015018 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Jul 7 00:09:26.015023 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 00:09:26.015029 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 00:09:26.015034 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Jul 7 00:09:26.015040 kernel: smp: Bringing up secondary CPUs ...
Jul 7 00:09:26.015045 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 00:09:26.015051 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Jul 7 00:09:26.015057 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 7 00:09:26.015063 kernel: smp: Brought up 1 node, 16 CPUs
Jul 7 00:09:26.015068 kernel: smpboot: Max logical packages: 1
Jul 7 00:09:26.015074 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Jul 7 00:09:26.015079 kernel: devtmpfs: initialized
Jul 7 00:09:26.015085 kernel: x86/mm: Memory block size: 128MB
Jul 7 00:09:26.015090 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81a4f000-0x81a4ffff] (4096 bytes)
Jul 7 00:09:26.015096 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes)
Jul 7 00:09:26.015101 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 00:09:26.015108 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jul 7 00:09:26.015113 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 00:09:26.015118 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 00:09:26.015124 kernel: audit: initializing netlink subsys (disabled)
Jul 7 00:09:26.015131 kernel: audit: type=2000 audit(1751846960.039:1): state=initialized audit_enabled=0 res=1
Jul 7 00:09:26.015136 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 00:09:26.015142 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 00:09:26.015147 kernel: cpuidle: using governor menu
Jul 7 00:09:26.015154 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 00:09:26.015159 kernel: dca service started, version 1.12.1
Jul 7 00:09:26.015165 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jul 7 00:09:26.015170 kernel: PCI: Using configuration type 1 for base access
Jul 7 00:09:26.015175 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Jul 7 00:09:26.015181 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 00:09:26.015186 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 00:09:26.015192 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 00:09:26.015197 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 00:09:26.015203 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 00:09:26.015209 kernel: ACPI: Added _OSI(Module Device)
Jul 7 00:09:26.015214 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 00:09:26.015220 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 00:09:26.015225 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Jul 7 00:09:26.015231 kernel: ACPI: Dynamic OEM Table Load:
Jul 7 00:09:26.015236 kernel: ACPI: SSDT 0xFFFF985FC1AF2C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Jul 7 00:09:26.015242 kernel: ACPI: Dynamic OEM Table Load:
Jul 7 00:09:26.015247 kernel: ACPI: SSDT 0xFFFF985FC1AED800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Jul 7 00:09:26.015253 kernel: ACPI: Dynamic OEM Table Load:
Jul 7 00:09:26.015259 kernel: ACPI: SSDT 0xFFFF985FC0247E00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Jul 7 00:09:26.015264 kernel: ACPI: Dynamic OEM Table Load:
Jul 7 00:09:26.015270 kernel: ACPI: SSDT 0xFFFF985FC1E5C000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Jul 7 00:09:26.015275 kernel: ACPI: Dynamic OEM Table Load:
Jul 7 00:09:26.015280 kernel: ACPI: SSDT 0xFFFF985FC012D000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Jul 7 00:09:26.015286 kernel: ACPI: Dynamic OEM Table Load:
Jul 7 00:09:26.015291 kernel: ACPI: SSDT 0xFFFF985FC1AF1400 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Jul 7 00:09:26.015297 kernel: ACPI: _OSC evaluated successfully for all CPUs
Jul 7 00:09:26.015302 kernel: ACPI: Interpreter enabled
Jul 7 00:09:26.015308 kernel: ACPI: PM: (supports S0 S5)
Jul 7 00:09:26.015314 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 00:09:26.015319 kernel: HEST: Enabling Firmware First mode for corrected errors.
Jul 7 00:09:26.015325 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Jul 7 00:09:26.015330 kernel: HEST: Table parsing has been initialized.
Jul 7 00:09:26.015335 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Jul 7 00:09:26.015341 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 00:09:26.015346 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jul 7 00:09:26.015351 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Jul 7 00:09:26.015358 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Jul 7 00:09:26.015364 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Jul 7 00:09:26.015369 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Jul 7 00:09:26.015374 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Jul 7 00:09:26.015380 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Jul 7 00:09:26.015385 kernel: ACPI: \_TZ_.FN00: New power resource
Jul 7 00:09:26.015391 kernel: ACPI: \_TZ_.FN01: New power resource
Jul 7 00:09:26.015396 kernel: ACPI: \_TZ_.FN02: New power resource
Jul 7 00:09:26.015402 kernel: ACPI: \_TZ_.FN03: New power resource
Jul 7 00:09:26.015408 kernel: ACPI: \_TZ_.FN04: New power resource
Jul 7 00:09:26.015414 kernel: ACPI: \PIN_: New power resource
Jul 7 00:09:26.015419 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Jul 7 00:09:26.015494 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 00:09:26.015549 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Jul 7 00:09:26.015600 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Jul 7 00:09:26.015608 kernel: PCI host bridge to bus 0000:00
Jul 7 00:09:26.015662 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 00:09:26.015709 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 00:09:26.015752 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 00:09:26.015796 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Jul 7 00:09:26.015839 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Jul 7 00:09:26.015883 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Jul 7 00:09:26.015942 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Jul 7 00:09:26.016005 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Jul 7 00:09:26.016056 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Jul 7 00:09:26.016111 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Jul 7 00:09:26.016164 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Jul 7 00:09:26.016218 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Jul 7 00:09:26.016267 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Jul 7 00:09:26.016324 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Jul 7 00:09:26.016374 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Jul 7 00:09:26.016423 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Jul 7 00:09:26.016476 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Jul 7 00:09:26.016525 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Jul 7 00:09:26.016575 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Jul 7 00:09:26.016630 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Jul 7 00:09:26.016680 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jul 7 00:09:26.016736 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Jul 7 00:09:26.016787 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jul 7 00:09:26.016839 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Jul 7 00:09:26.016889 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Jul 7 00:09:26.016940 kernel: pci 0000:00:16.0: PME# supported from D3hot
Jul 7 00:09:26.016993 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Jul 7 00:09:26.017042 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Jul 7 00:09:26.017101 kernel: pci 0000:00:16.1: PME# supported from D3hot
Jul 7 00:09:26.017157 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Jul 7 00:09:26.017208 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Jul 7 00:09:26.017257 kernel: pci 0000:00:16.4: PME# supported from D3hot
Jul 7 00:09:26.017312 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Jul 7 00:09:26.017362 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Jul 7 00:09:26.017411 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Jul 7 00:09:26.017461 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Jul 7 00:09:26.017509 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Jul 7 00:09:26.017558 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Jul 7 00:09:26.017606 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Jul 7 00:09:26.017658 kernel: pci 0000:00:17.0: PME# supported from D3hot
Jul 7 00:09:26.017712 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Jul 7 00:09:26.017765 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Jul 7 00:09:26.017824 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Jul 7 00:09:26.017877 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Jul 7 00:09:26.017930 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Jul 7 00:09:26.017983 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Jul 7 00:09:26.018037 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Jul 7 00:09:26.018087 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Jul 7 00:09:26.018145 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Jul 7 00:09:26.018239 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Jul 7 00:09:26.018294 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Jul 7 00:09:26.018343 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jul 7 00:09:26.018397 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Jul 7 00:09:26.018452 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Jul 7 00:09:26.018503 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Jul 7 00:09:26.018555 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Jul 7 00:09:26.018610 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Jul 7 00:09:26.018659 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Jul 7 00:09:26.018717 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Jul 7 00:09:26.018770 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Jul 7 00:09:26.018821 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Jul 7 00:09:26.018873 kernel: pci 0000:01:00.0: PME# supported from D3cold
Jul 7 00:09:26.018926 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jul 7 00:09:26.018978 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jul 7 00:09:26.019033 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Jul 7 00:09:26.019086 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Jul 7 00:09:26.019140 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Jul 7 00:09:26.019191 kernel: pci 0000:01:00.1: PME# supported from D3cold
Jul 7 00:09:26.019243 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jul 7 00:09:26.019296 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jul 7 00:09:26.019347 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jul 7 00:09:26.019396 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Jul 7 00:09:26.019447 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Jul 7 00:09:26.019497 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Jul 7 00:09:26.019553 kernel: pci 
0000:03:00.0: working around ROM BAR overlap defect Jul 7 00:09:26.019604 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jul 7 00:09:26.019659 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jul 7 00:09:26.019711 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jul 7 00:09:26.019762 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jul 7 00:09:26.019813 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 7 00:09:26.019864 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jul 7 00:09:26.019914 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 7 00:09:26.019964 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jul 7 00:09:26.020022 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jul 7 00:09:26.020073 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jul 7 00:09:26.020127 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jul 7 00:09:26.020179 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jul 7 00:09:26.020231 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jul 7 00:09:26.020283 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jul 7 00:09:26.020333 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jul 7 00:09:26.020384 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 7 00:09:26.020436 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jul 7 00:09:26.020487 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jul 7 00:09:26.020542 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jul 7 00:09:26.020595 kernel: pci 0000:06:00.0: enabling Extended Tags Jul 7 00:09:26.020645 kernel: pci 0000:06:00.0: supports D1 D2 Jul 7 00:09:26.020696 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 7 00:09:26.020746 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jul 7 00:09:26.020799 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jul 7 
00:09:26.020849 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jul 7 00:09:26.020905 kernel: pci_bus 0000:07: extended config space not accessible Jul 7 00:09:26.020963 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jul 7 00:09:26.021017 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jul 7 00:09:26.021072 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jul 7 00:09:26.021127 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jul 7 00:09:26.021183 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 7 00:09:26.021236 kernel: pci 0000:07:00.0: supports D1 D2 Jul 7 00:09:26.021288 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 7 00:09:26.021340 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jul 7 00:09:26.021391 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jul 7 00:09:26.021442 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jul 7 00:09:26.021451 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jul 7 00:09:26.021459 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jul 7 00:09:26.021465 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jul 7 00:09:26.021471 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jul 7 00:09:26.021477 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jul 7 00:09:26.021483 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jul 7 00:09:26.021489 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jul 7 00:09:26.021494 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jul 7 00:09:26.021500 kernel: iommu: Default domain type: Translated Jul 7 00:09:26.021506 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 7 00:09:26.021512 kernel: PCI: Using ACPI for IRQ routing Jul 7 00:09:26.021518 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 7 00:09:26.021524 kernel: e820: reserve RAM 
buffer [mem 0x00099800-0x0009ffff] Jul 7 00:09:26.021530 kernel: e820: reserve RAM buffer [mem 0x81a4f000-0x83ffffff] Jul 7 00:09:26.021535 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] Jul 7 00:09:26.021541 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] Jul 7 00:09:26.021546 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jul 7 00:09:26.021552 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jul 7 00:09:26.021604 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jul 7 00:09:26.021656 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jul 7 00:09:26.021712 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 7 00:09:26.021721 kernel: vgaarb: loaded Jul 7 00:09:26.021727 kernel: clocksource: Switched to clocksource tsc-early Jul 7 00:09:26.021733 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 00:09:26.021739 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 00:09:26.021744 kernel: pnp: PnP ACPI init Jul 7 00:09:26.021796 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jul 7 00:09:26.021847 kernel: pnp 00:02: [dma 0 disabled] Jul 7 00:09:26.021899 kernel: pnp 00:03: [dma 0 disabled] Jul 7 00:09:26.021952 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jul 7 00:09:26.021997 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jul 7 00:09:26.022047 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Jul 7 00:09:26.022092 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Jul 7 00:09:26.022142 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Jul 7 00:09:26.022191 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Jul 7 00:09:26.022237 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Jul 7 00:09:26.022285 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Jul 7 00:09:26.022331 kernel: system 
00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Jul 7 00:09:26.022377 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Jul 7 00:09:26.022427 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Jul 7 00:09:26.022475 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Jul 7 00:09:26.022522 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jul 7 00:09:26.022568 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Jul 7 00:09:26.022613 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Jul 7 00:09:26.022658 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Jul 7 00:09:26.022704 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Jul 7 00:09:26.022753 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Jul 7 00:09:26.022762 kernel: pnp: PnP ACPI: found 9 devices Jul 7 00:09:26.022770 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 7 00:09:26.022776 kernel: NET: Registered PF_INET protocol family Jul 7 00:09:26.022782 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 00:09:26.022787 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jul 7 00:09:26.022793 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 00:09:26.022799 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 00:09:26.022805 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 7 00:09:26.022811 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jul 7 00:09:26.022816 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 7 00:09:26.022823 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 7 00:09:26.022829 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 
00:09:26.022835 kernel: NET: Registered PF_XDP protocol family Jul 7 00:09:26.022885 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jul 7 00:09:26.022936 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jul 7 00:09:26.022987 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jul 7 00:09:26.023040 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jul 7 00:09:26.023091 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jul 7 00:09:26.023148 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jul 7 00:09:26.023200 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jul 7 00:09:26.023250 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 7 00:09:26.023300 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jul 7 00:09:26.023348 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jul 7 00:09:26.023399 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jul 7 00:09:26.023450 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jul 7 00:09:26.023500 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 7 00:09:26.023549 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jul 7 00:09:26.023599 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jul 7 00:09:26.023650 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 7 00:09:26.023699 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jul 7 00:09:26.023749 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jul 7 00:09:26.023801 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jul 7 00:09:26.023853 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jul 7 00:09:26.023904 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jul 7 00:09:26.023955 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jul 7 00:09:26.024005 
kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jul 7 00:09:26.024055 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jul 7 00:09:26.024101 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 7 00:09:26.024149 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 7 00:09:26.024196 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 7 00:09:26.024241 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 7 00:09:26.024284 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jul 7 00:09:26.024329 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jul 7 00:09:26.024378 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jul 7 00:09:26.024425 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jul 7 00:09:26.024475 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jul 7 00:09:26.024524 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jul 7 00:09:26.024577 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jul 7 00:09:26.024623 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jul 7 00:09:26.024673 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jul 7 00:09:26.024719 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jul 7 00:09:26.024767 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jul 7 00:09:26.024814 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jul 7 00:09:26.024824 kernel: PCI: CLS 64 bytes, default 64 Jul 7 00:09:26.024830 kernel: DMAR: No ATSR found Jul 7 00:09:26.024836 kernel: DMAR: No SATC found Jul 7 00:09:26.024842 kernel: DMAR: dmar0: Using Queued invalidation Jul 7 00:09:26.024892 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jul 7 00:09:26.024942 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jul 7 00:09:26.024994 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jul 7 
00:09:26.025044 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jul 7 00:09:26.025097 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jul 7 00:09:26.025149 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jul 7 00:09:26.025199 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jul 7 00:09:26.025249 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jul 7 00:09:26.025298 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jul 7 00:09:26.025348 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jul 7 00:09:26.025397 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jul 7 00:09:26.025447 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jul 7 00:09:26.025499 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jul 7 00:09:26.025549 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jul 7 00:09:26.025598 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jul 7 00:09:26.025648 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jul 7 00:09:26.025697 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jul 7 00:09:26.025746 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jul 7 00:09:26.025796 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jul 7 00:09:26.025846 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jul 7 00:09:26.025898 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Jul 7 00:09:26.025950 kernel: pci 0000:01:00.0: Adding to iommu group 1 Jul 7 00:09:26.026001 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jul 7 00:09:26.026053 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jul 7 00:09:26.026104 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jul 7 00:09:26.026186 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jul 7 00:09:26.026261 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jul 7 00:09:26.026269 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jul 7 00:09:26.026275 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 7 00:09:26.026283 kernel: software IO TLB: mapped [mem 
0x0000000086fce000-0x000000008afce000] (64MB) Jul 7 00:09:26.026289 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jul 7 00:09:26.026295 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jul 7 00:09:26.026301 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jul 7 00:09:26.026306 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jul 7 00:09:26.026358 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jul 7 00:09:26.026367 kernel: Initialise system trusted keyrings Jul 7 00:09:26.026373 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jul 7 00:09:26.026381 kernel: Key type asymmetric registered Jul 7 00:09:26.026386 kernel: Asymmetric key parser 'x509' registered Jul 7 00:09:26.026392 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 7 00:09:26.026398 kernel: io scheduler mq-deadline registered Jul 7 00:09:26.026403 kernel: io scheduler kyber registered Jul 7 00:09:26.026409 kernel: io scheduler bfq registered Jul 7 00:09:26.026459 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jul 7 00:09:26.026508 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jul 7 00:09:26.026559 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Jul 7 00:09:26.026611 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jul 7 00:09:26.026662 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jul 7 00:09:26.026712 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jul 7 00:09:26.026768 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jul 7 00:09:26.026777 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jul 7 00:09:26.026783 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. 
Jul 7 00:09:26.026789 kernel: pstore: Using crash dump compression: deflate Jul 7 00:09:26.026797 kernel: pstore: Registered erst as persistent store backend Jul 7 00:09:26.026802 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 7 00:09:26.026808 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 00:09:26.026814 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 7 00:09:26.026820 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 7 00:09:26.026826 kernel: hpet_acpi_add: no address or irqs in _CRS Jul 7 00:09:26.026879 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jul 7 00:09:26.026888 kernel: i8042: PNP: No PS/2 controller found. Jul 7 00:09:26.026933 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jul 7 00:09:26.026983 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jul 7 00:09:26.027028 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-07-07T00:09:24 UTC (1751846964) Jul 7 00:09:26.027075 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jul 7 00:09:26.027083 kernel: intel_pstate: Intel P-state driver initializing Jul 7 00:09:26.027089 kernel: intel_pstate: Disabling energy efficiency optimization Jul 7 00:09:26.027095 kernel: intel_pstate: HWP enabled Jul 7 00:09:26.027101 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Jul 7 00:09:26.027107 kernel: vesafb: scrolling: redraw Jul 7 00:09:26.027114 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Jul 7 00:09:26.027120 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000001123de9b, using 768k, total 768k Jul 7 00:09:26.027129 kernel: Console: switching to colour frame buffer device 128x48 Jul 7 00:09:26.027134 kernel: fb0: VESA VGA frame buffer device Jul 7 00:09:26.027140 kernel: NET: Registered PF_INET6 protocol family Jul 7 00:09:26.027146 kernel: Segment Routing with IPv6 Jul 7 00:09:26.027152 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 
00:09:26.027158 kernel: NET: Registered PF_PACKET protocol family Jul 7 00:09:26.027163 kernel: Key type dns_resolver registered Jul 7 00:09:26.027170 kernel: microcode: Current revision: 0x00000102 Jul 7 00:09:26.027176 kernel: microcode: Microcode Update Driver: v2.2. Jul 7 00:09:26.027182 kernel: IPI shorthand broadcast: enabled Jul 7 00:09:26.027188 kernel: sched_clock: Marking stable (1561000636, 1379331553)->(4400963659, -1460631470) Jul 7 00:09:26.027193 kernel: registered taskstats version 1 Jul 7 00:09:26.027199 kernel: Loading compiled-in X.509 certificates Jul 7 00:09:26.027205 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b' Jul 7 00:09:26.027210 kernel: Key type .fscrypt registered Jul 7 00:09:26.027216 kernel: Key type fscrypt-provisioning registered Jul 7 00:09:26.027223 kernel: ima: Allocated hash algorithm: sha1 Jul 7 00:09:26.027229 kernel: ima: No architecture policies found Jul 7 00:09:26.027234 kernel: clk: Disabling unused clocks Jul 7 00:09:26.027240 kernel: Freeing unused kernel image (initmem) memory: 42868K Jul 7 00:09:26.027246 kernel: Write protecting the kernel read-only data: 36864k Jul 7 00:09:26.027252 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 7 00:09:26.027257 kernel: Run /init as init process Jul 7 00:09:26.027263 kernel: with arguments: Jul 7 00:09:26.027269 kernel: /init Jul 7 00:09:26.027275 kernel: with environment: Jul 7 00:09:26.027281 kernel: HOME=/ Jul 7 00:09:26.027286 kernel: TERM=linux Jul 7 00:09:26.027292 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 00:09:26.027299 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 
00:09:26.027306 systemd[1]: Detected architecture x86-64. Jul 7 00:09:26.027312 systemd[1]: Running in initrd. Jul 7 00:09:26.027319 systemd[1]: No hostname configured, using default hostname. Jul 7 00:09:26.027325 systemd[1]: Hostname set to . Jul 7 00:09:26.027331 systemd[1]: Initializing machine ID from random generator. Jul 7 00:09:26.027337 systemd[1]: Queued start job for default target initrd.target. Jul 7 00:09:26.027343 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:09:26.027349 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:09:26.027355 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 00:09:26.027361 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 00:09:26.027368 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 00:09:26.027374 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 00:09:26.027381 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 00:09:26.027387 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 00:09:26.027393 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Jul 7 00:09:26.027399 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:09:26.027405 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Jul 7 00:09:26.027412 kernel: clocksource: Switched to clocksource tsc Jul 7 00:09:26.027418 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:09:26.027424 systemd[1]: Reached target paths.target - Path Units. 
Jul 7 00:09:26.027430 systemd[1]: Reached target slices.target - Slice Units. Jul 7 00:09:26.027436 systemd[1]: Reached target swap.target - Swaps. Jul 7 00:09:26.027442 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:09:26.027448 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:09:26.027454 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 00:09:26.027460 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 00:09:26.027467 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 7 00:09:26.027473 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:09:26.027479 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 00:09:26.027485 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:09:26.027491 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:09:26.027497 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 00:09:26.027503 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 00:09:26.027509 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 00:09:26.027516 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 00:09:26.027522 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:09:26.027538 systemd-journald[267]: Collecting audit messages is disabled. Jul 7 00:09:26.027552 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:09:26.027560 systemd-journald[267]: Journal started Jul 7 00:09:26.027574 systemd-journald[267]: Runtime Journal (/run/log/journal/4a4a0d013488432583207b0b378b27ee) is 8.0M, max 639.9M, 631.9M free. Jul 7 00:09:26.041934 systemd-modules-load[269]: Inserted module 'overlay' Jul 7 00:09:26.071256 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 7 00:09:26.117173 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 00:09:26.138159 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:09:26.138180 kernel: Bridge firewalling registered Jul 7 00:09:26.154265 systemd-modules-load[269]: Inserted module 'br_netfilter' Jul 7 00:09:26.165648 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 00:09:26.174569 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:09:26.174678 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 00:09:26.174779 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 00:09:26.192484 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:09:26.254479 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 00:09:26.256944 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:09:26.282750 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:09:26.315551 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:09:26.335469 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 00:09:26.356509 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:09:26.393417 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 00:09:26.405306 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:09:26.405884 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:09:26.411447 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 7 00:09:26.419421 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:09:26.427311 systemd-resolved[296]: Positive Trust Anchors: Jul 7 00:09:26.427317 systemd-resolved[296]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:09:26.427349 systemd-resolved[296]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:09:26.429509 systemd-resolved[296]: Defaulting to hostname 'linux'. Jul 7 00:09:26.430344 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:09:26.451360 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:09:26.481688 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 00:09:26.589021 dracut-cmdline[311]: dracut-dracut-053 Jul 7 00:09:26.596352 dracut-cmdline[311]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 7 00:09:26.775137 kernel: SCSI subsystem initialized Jul 7 00:09:26.799131 kernel: Loading iSCSI transport class v2.0-870. 
Jul 7 00:09:26.822132 kernel: iscsi: registered transport (tcp) Jul 7 00:09:26.855913 kernel: iscsi: registered transport (qla4xxx) Jul 7 00:09:26.855931 kernel: QLogic iSCSI HBA Driver Jul 7 00:09:26.888768 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 00:09:26.916393 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 00:09:26.974908 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 00:09:26.974931 kernel: device-mapper: uevent: version 1.0.3 Jul 7 00:09:26.994832 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 7 00:09:27.053197 kernel: raid6: avx2x4 gen() 53189 MB/s Jul 7 00:09:27.085158 kernel: raid6: avx2x2 gen() 53777 MB/s Jul 7 00:09:27.121705 kernel: raid6: avx2x1 gen() 45146 MB/s Jul 7 00:09:27.121724 kernel: raid6: using algorithm avx2x2 gen() 53777 MB/s Jul 7 00:09:27.169785 kernel: raid6: .... xor() 31735 MB/s, rmw enabled Jul 7 00:09:27.169802 kernel: raid6: using avx2x2 recovery algorithm Jul 7 00:09:27.211163 kernel: xor: automatically using best checksumming function avx Jul 7 00:09:27.325162 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 00:09:27.330755 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:09:27.353431 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:09:27.360360 systemd-udevd[495]: Using default interface naming scheme 'v255'. Jul 7 00:09:27.364368 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:09:27.400343 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 00:09:27.466761 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Jul 7 00:09:27.537737 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 7 00:09:27.557553 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:09:27.646924 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:09:27.661133 kernel: cryptd: max_cpu_qlen set to 1000 Jul 7 00:09:27.661169 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 7 00:09:27.687133 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 7 00:09:27.717549 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 00:09:27.753282 kernel: PTP clock support registered Jul 7 00:09:27.753299 kernel: libata version 3.00 loaded. Jul 7 00:09:27.753308 kernel: ACPI: bus type USB registered Jul 7 00:09:27.753320 kernel: AVX2 version of gcm_enc/dec engaged. Jul 7 00:09:27.753328 kernel: usbcore: registered new interface driver usbfs Jul 7 00:09:27.728105 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 00:09:27.823907 kernel: usbcore: registered new interface driver hub Jul 7 00:09:27.823926 kernel: usbcore: registered new device driver usb Jul 7 00:09:27.823935 kernel: AES CTR mode by8 optimization enabled Jul 7 00:09:27.810308 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 00:09:27.834284 kernel: ahci 0000:00:17.0: version 3.0 Jul 7 00:09:27.834167 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:09:27.874046 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Jul 7 00:09:27.874139 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jul 7 00:09:27.871170 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jul 7 00:09:27.955202 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014 Jul 7 00:09:27.955310 kernel: scsi host0: ahci Jul 7 00:09:27.955381 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jul 7 00:09:27.955450 kernel: scsi host1: ahci Jul 7 00:09:27.955518 kernel: scsi host2: ahci Jul 7 00:09:27.899162 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 00:09:27.978871 kernel: scsi host3: ahci Jul 7 00:09:27.978958 kernel: scsi host4: ahci Jul 7 00:09:27.899204 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:09:28.006242 kernel: scsi host5: ahci Jul 7 00:09:27.992725 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 00:09:28.134006 kernel: scsi host6: ahci Jul 7 00:09:28.134136 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Jul 7 00:09:28.134150 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Jul 7 00:09:28.134160 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Jul 7 00:09:28.134167 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Jul 7 00:09:28.134174 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Jul 7 00:09:28.134183 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Jul 7 00:09:28.134191 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Jul 7 00:09:28.155706 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jul 7 00:09:28.155796 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jul 7 00:09:28.190596 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jul 7 00:09:28.190689 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
Jul 7 00:09:28.190781 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jul 7 00:09:28.203052 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Jul 7 00:09:28.203145 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jul 7 00:09:28.223947 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jul 7 00:09:28.223965 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jul 7 00:09:28.224045 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jul 7 00:09:28.270185 kernel: igb 0000:03:00.0: added PHC on eth0 Jul 7 00:09:28.270277 kernel: hub 1-0:1.0: USB hub found Jul 7 00:09:28.271264 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 00:09:28.699219 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 7 00:09:28.699313 kernel: hub 1-0:1.0: 16 ports detected Jul 7 00:09:28.699384 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:e6:d4 Jul 7 00:09:28.699452 kernel: hub 2-0:1.0: USB hub found Jul 7 00:09:28.699526 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jul 7 00:09:28.699593 kernel: hub 2-0:1.0: 10 ports detected
Jul 7 00:09:28.699654 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jul 7 00:09:28.699718 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jul 7 00:09:28.699727 kernel: igb 0000:04:00.0: added PHC on eth1 Jul 7 00:09:28.699794 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 7 00:09:28.699802 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 7 00:09:28.699865 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 7 00:09:28.699875 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:e6:d5 Jul 7 00:09:28.699939 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 7 00:09:28.699947 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jul 7 00:09:28.700009 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jul 7 00:09:28.700017 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jul 7 00:09:28.700079 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 7 00:09:28.700149 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014 Jul 7 00:09:28.700217 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jul 7 00:09:28.700282 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jul 7 00:09:28.700290 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jul 7 00:09:28.700362 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jul 7 00:09:28.700370 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 7 00:09:28.700378 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jul 7 00:09:28.700385 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jul 7 00:09:28.700393 kernel: hub 1-14:1.0: USB hub found Jul 7 00:09:28.682245 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:09:28.882264 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jul 7 00:09:28.882275 kernel: hub 1-14:1.0: 4 ports detected Jul 7 00:09:28.882354 kernel: ata1.00: Features: NCQ-prio Jul 7 00:09:28.882363 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Jul 7 00:09:28.882432 kernel: ata2.00: Features: NCQ-prio Jul 7 00:09:28.882440 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jul 7 00:09:28.882508 kernel: ata1.00: configured for UDMA/133 Jul 7 00:09:28.882516 kernel: ata2.00: configured for UDMA/133 Jul 7 00:09:28.882523 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jul 7 00:09:28.882592 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jul 7 00:09:28.682282 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:09:28.915184 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jul 7 00:09:28.915285 kernel: ata2.00: Enabling discard_zeroes_data Jul 7 00:09:28.864320 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 7 00:09:29.303979 kernel: ata1.00: Enabling discard_zeroes_data Jul 7 00:09:29.303997 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jul 7 00:09:29.304131 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Jul 7 00:09:29.304203 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jul 7 00:09:29.304271 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 7 00:09:29.304333 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 7 00:09:29.304395 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jul 7 00:09:29.304456 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 7 00:09:29.304517 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jul 7 00:09:29.304578 kernel: ata1.00: Enabling discard_zeroes_data Jul 7 00:09:29.304586 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jul 7 00:09:29.304659 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 00:09:29.304668 kernel: GPT:9289727 != 937703087 Jul 7 00:09:29.304675 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 7 00:09:29.304683 kernel: GPT:9289727 != 937703087 Jul 7 00:09:29.304690 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jul 7 00:09:29.304696 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:09:29.304704 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 7 00:09:29.304765 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jul 7 00:09:29.304872 kernel: sd 1:0:0:0: [sdb] Write Protect is off Jul 7 00:09:29.304937 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jul 7 00:09:29.304997 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 7 00:09:29.305059 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 7 00:09:29.305130 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jul 7 00:09:29.305199 kernel: ata2.00: Enabling discard_zeroes_data Jul 7 00:09:29.305208 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Jul 7 00:09:28.936287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:09:29.304113 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 00:09:29.327135 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 00:09:29.327156 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Jul 7 00:09:29.351338 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Jul 7 00:09:29.372679 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (690) Jul 7 00:09:29.372693 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (688) Jul 7 00:09:29.376238 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Jul 7 00:09:29.412736 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. 
Jul 7 00:09:29.526275 kernel: usbcore: registered new interface driver usbhid Jul 7 00:09:29.526314 kernel: usbhid: USB HID core driver Jul 7 00:09:29.526334 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jul 7 00:09:29.526358 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Jul 7 00:09:29.526556 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jul 7 00:09:29.502630 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:09:29.576868 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jul 7 00:09:29.576882 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jul 7 00:09:29.546191 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Jul 7 00:09:29.611944 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Jul 7 00:09:29.652415 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 00:09:29.695215 kernel: ata1.00: Enabling discard_zeroes_data Jul 7 00:09:29.695228 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:09:29.669633 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 00:09:29.733205 kernel: ata1.00: Enabling discard_zeroes_data Jul 7 00:09:29.733217 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:09:29.733225 disk-uuid[715]: Primary Header is updated. Jul 7 00:09:29.733225 disk-uuid[715]: Secondary Entries is updated. Jul 7 00:09:29.733225 disk-uuid[715]: Secondary Header is updated. 
Jul 7 00:09:29.770770 kernel: ata1.00: Enabling discard_zeroes_data Jul 7 00:09:29.770779 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:09:29.777311 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:09:30.750493 kernel: ata1.00: Enabling discard_zeroes_data Jul 7 00:09:30.769853 disk-uuid[716]: The operation has completed successfully. Jul 7 00:09:30.778268 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:09:30.803104 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 00:09:30.803224 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 00:09:30.837390 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 00:09:30.875262 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 7 00:09:30.875328 sh[744]: Success Jul 7 00:09:30.907400 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 00:09:30.918129 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 00:09:30.949199 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 00:09:30.989439 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f Jul 7 00:09:30.989458 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:09:31.010946 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 7 00:09:31.030050 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 7 00:09:31.048354 kernel: BTRFS info (device dm-0): using free space tree Jul 7 00:09:31.085175 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 7 00:09:31.087203 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Jul 7 00:09:31.087529 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 00:09:31.101441 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 00:09:31.102860 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 00:09:31.262284 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 00:09:31.262298 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:09:31.262306 kernel: BTRFS info (device sda6): using free space tree Jul 7 00:09:31.262313 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 7 00:09:31.262324 kernel: BTRFS info (device sda6): auto enabling async discard Jul 7 00:09:31.262331 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 00:09:31.232787 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 7 00:09:31.271504 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 00:09:31.291319 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 00:09:31.317736 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:09:31.348257 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 7 00:09:31.359798 systemd-networkd[927]: lo: Link UP Jul 7 00:09:31.357653 ignition[839]: Ignition 2.19.0 Jul 7 00:09:31.359800 unknown[839]: fetched base config from "system" Jul 7 00:09:31.357658 ignition[839]: Stage: fetch-offline Jul 7 00:09:31.359801 systemd-networkd[927]: lo: Gained carrier Jul 7 00:09:31.357683 ignition[839]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:09:31.359804 unknown[839]: fetched user config from "system" Jul 7 00:09:31.357688 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 00:09:31.360635 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:09:31.357746 ignition[839]: parsed url from cmdline: "" Jul 7 00:09:31.362486 systemd-networkd[927]: Enumeration completed Jul 7 00:09:31.357748 ignition[839]: no config URL provided Jul 7 00:09:31.363370 systemd-networkd[927]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:09:31.357751 ignition[839]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 00:09:31.380443 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:09:31.357774 ignition[839]: parsing config with SHA512: 59190e8749afca63946c530e424e90ea340c33f058b416720346109b8c81e5cd10b0ea4b663d581cf8dcb2995bd9d263e0c6ec5135eb83f33ba8efa9ab0be806 Jul 7 00:09:31.391226 systemd-networkd[927]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:09:31.360018 ignition[839]: fetch-offline: fetch-offline passed Jul 7 00:09:31.399580 systemd[1]: Reached target network.target - Network. Jul 7 00:09:31.360021 ignition[839]: POST message to Packet Timeline Jul 7 00:09:31.419467 systemd-networkd[927]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 7 00:09:31.360024 ignition[839]: POST Status error: resource requires networking Jul 7 00:09:31.420376 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 7 00:09:31.360059 ignition[839]: Ignition finished successfully Jul 7 00:09:31.430369 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 00:09:31.448714 ignition[939]: Ignition 2.19.0 Jul 7 00:09:31.448724 ignition[939]: Stage: kargs Jul 7 00:09:31.647240 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jul 7 00:09:31.643651 systemd-networkd[927]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:09:31.448952 ignition[939]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:09:31.448967 ignition[939]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 00:09:31.450223 ignition[939]: kargs: kargs passed Jul 7 00:09:31.450229 ignition[939]: POST message to Packet Timeline Jul 7 00:09:31.450247 ignition[939]: GET https://metadata.packet.net/metadata: attempt #1 Jul 7 00:09:31.451180 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53659->[::1]:53: read: connection refused Jul 7 00:09:31.652211 ignition[939]: GET https://metadata.packet.net/metadata: attempt #2 Jul 7 00:09:31.653229 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55532->[::1]:53: read: connection refused Jul 7 00:09:31.929142 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jul 7 00:09:31.930391 systemd-networkd[927]: eno1: Link UP Jul 7 00:09:31.930690 systemd-networkd[927]: eno2: Link UP Jul 7 00:09:31.930826 systemd-networkd[927]: enp1s0f0np0: Link UP Jul 7 00:09:31.930982 systemd-networkd[927]: enp1s0f0np0: Gained carrier Jul 7 00:09:31.939373 systemd-networkd[927]: enp1s0f1np1: Link UP
Jul 7 00:09:31.971287 systemd-networkd[927]: enp1s0f0np0: DHCPv4 address 147.28.180.255/31, gateway 147.28.180.254 acquired from 145.40.83.140 Jul 7 00:09:32.054298 ignition[939]: GET https://metadata.packet.net/metadata: attempt #3 Jul 7 00:09:32.055454 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34378->[::1]:53: read: connection refused Jul 7 00:09:32.650864 systemd-networkd[927]: enp1s0f1np1: Gained carrier Jul 7 00:09:32.855947 ignition[939]: GET https://metadata.packet.net/metadata: attempt #4 Jul 7 00:09:32.857275 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43765->[::1]:53: read: connection refused Jul 7 00:09:33.098739 systemd-networkd[927]: enp1s0f0np0: Gained IPv6LL Jul 7 00:09:34.122731 systemd-networkd[927]: enp1s0f1np1: Gained IPv6LL Jul 7 00:09:34.458305 ignition[939]: GET https://metadata.packet.net/metadata: attempt #5 Jul 7 00:09:34.459480 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46429->[::1]:53: read: connection refused Jul 7 00:09:37.662167 ignition[939]: GET https://metadata.packet.net/metadata: attempt #6 Jul 7 00:09:38.766889 ignition[939]: GET result: OK Jul 7 00:09:39.233912 ignition[939]: Ignition finished successfully Jul 7 00:09:39.239970 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 00:09:39.265376 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 00:09:39.271508 ignition[959]: Ignition 2.19.0 Jul 7 00:09:39.271512 ignition[959]: Stage: disks Jul 7 00:09:39.271619 ignition[959]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:09:39.271626 ignition[959]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 00:09:39.272141 ignition[959]: disks: disks passed Jul 7 00:09:39.272144 ignition[959]: POST message to Packet Timeline Jul 7 00:09:39.272152 ignition[959]: GET https://metadata.packet.net/metadata: attempt #1 Jul 7 00:09:40.341569 ignition[959]: GET result: OK Jul 7 00:09:40.677548 ignition[959]: Ignition finished successfully Jul 7 00:09:40.679369 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 00:09:40.695336 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 00:09:40.713378 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 00:09:40.734373 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 00:09:40.756548 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:09:40.777533 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:09:40.807652 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 00:09:40.834895 systemd-fsck[975]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 7 00:09:40.846275 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 00:09:40.879360 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 00:09:40.979174 kernel: EXT4-fs (sda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none. Jul 7 00:09:40.979152 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 00:09:40.987553 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 00:09:41.019337 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 7 00:09:41.027663 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 00:09:41.159242 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (984) Jul 7 00:09:41.159258 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 00:09:41.159266 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:09:41.159336 kernel: BTRFS info (device sda6): using free space tree Jul 7 00:09:41.159346 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 7 00:09:41.159354 kernel: BTRFS info (device sda6): auto enabling async discard Jul 7 00:09:41.049051 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 7 00:09:41.112243 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jul 7 00:09:41.170404 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 00:09:41.230411 coreos-metadata[986]: Jul 07 00:09:41.219 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 7 00:09:41.170582 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:09:41.273220 coreos-metadata[1000]: Jul 07 00:09:41.219 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 7 00:09:41.195793 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 00:09:41.212427 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 00:09:41.303255 initrd-setup-root[1016]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 00:09:41.248352 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 7 00:09:41.323250 initrd-setup-root[1023]: cut: /sysroot/etc/group: No such file or directory Jul 7 00:09:41.333256 initrd-setup-root[1030]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 00:09:41.343403 initrd-setup-root[1037]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 00:09:41.338768 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 00:09:41.367292 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 00:09:41.409339 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 00:09:41.390094 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 00:09:41.418996 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 00:09:41.435630 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 00:09:41.452286 ignition[1104]: INFO : Ignition 2.19.0 Jul 7 00:09:41.452286 ignition[1104]: INFO : Stage: mount Jul 7 00:09:41.452286 ignition[1104]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:09:41.452286 ignition[1104]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 00:09:41.452286 ignition[1104]: INFO : mount: mount passed Jul 7 00:09:41.452286 ignition[1104]: INFO : POST message to Packet Timeline Jul 7 00:09:41.452286 ignition[1104]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 7 00:09:42.281857 coreos-metadata[1000]: Jul 07 00:09:42.281 INFO Fetch successful Jul 7 00:09:42.364699 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jul 7 00:09:42.364760 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jul 7 00:09:42.402100 ignition[1104]: INFO : GET result: OK Jul 7 00:09:42.707910 coreos-metadata[986]: Jul 07 00:09:42.707 INFO Fetch successful Jul 7 00:09:42.773013 ignition[1104]: INFO : Ignition finished successfully Jul 7 00:09:42.774007 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jul 7 00:09:42.792553 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 7 00:09:42.802223 coreos-metadata[986]: Jul 07 00:09:42.786 INFO wrote hostname ci-4081.3.4-a-fd0ee851f3 to /sysroot/etc/hostname Jul 7 00:09:42.809442 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 00:09:42.862443 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 00:09:42.902231 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1130) Jul 7 00:09:42.902250 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 00:09:42.922980 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:09:42.941382 kernel: BTRFS info (device sda6): using free space tree Jul 7 00:09:42.980391 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 7 00:09:42.980408 kernel: BTRFS info (device sda6): auto enabling async discard Jul 7 00:09:42.994355 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 7 00:09:43.021696 ignition[1147]: INFO : Ignition 2.19.0 Jul 7 00:09:43.021696 ignition[1147]: INFO : Stage: files Jul 7 00:09:43.037368 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:09:43.037368 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 00:09:43.037368 ignition[1147]: DEBUG : files: compiled without relabeling support, skipping Jul 7 00:09:43.037368 ignition[1147]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 00:09:43.037368 ignition[1147]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 00:09:43.037368 ignition[1147]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 00:09:43.037368 ignition[1147]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 00:09:43.037368 ignition[1147]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 00:09:43.037368 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 7 00:09:43.037368 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 7 00:09:43.025139 unknown[1147]: wrote ssh authorized keys file for user: core Jul 7 00:09:43.175373 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 00:09:43.246773 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 7 00:09:43.246773 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 7 00:09:44.037755 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 7 00:09:44.355735 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 00:09:44.355735 ignition[1147]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 7 00:09:44.385384 ignition[1147]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:09:44.385384 ignition[1147]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:09:44.385384 ignition[1147]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 7 00:09:44.385384 ignition[1147]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 7 00:09:44.385384 ignition[1147]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 00:09:44.385384 ignition[1147]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:09:44.385384 ignition[1147]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:09:44.385384 ignition[1147]: INFO : files: files passed Jul 7 00:09:44.385384 ignition[1147]: INFO : POST message to Packet Timeline Jul 7 00:09:44.385384 ignition[1147]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 7 00:09:46.187166 ignition[1147]: INFO : GET result: OK Jul 7 00:09:46.656253 ignition[1147]: INFO : Ignition finished successfully Jul 7 00:09:46.659426 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 00:09:46.693435 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 00:09:46.693864 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 00:09:46.723644 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 00:09:46.723735 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 00:09:46.769004 initrd-setup-root-after-ignition[1187]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:09:46.769004 initrd-setup-root-after-ignition[1187]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:09:46.807376 initrd-setup-root-after-ignition[1191]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:09:46.773561 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:09:46.784449 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 00:09:46.830359 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 00:09:46.891469 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 00:09:46.891521 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 00:09:46.910608 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 00:09:46.921461 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 00:09:46.948413 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 00:09:46.962583 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 00:09:47.034426 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:09:47.057565 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jul 7 00:09:47.115716 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:09:47.127755 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:09:47.148817 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 00:09:47.166731 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 00:09:47.167157 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:09:47.195962 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 00:09:47.216751 systemd[1]: Stopped target basic.target - Basic System. Jul 7 00:09:47.234726 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 00:09:47.253862 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:09:47.274757 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 00:09:47.295754 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 00:09:47.315743 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 00:09:47.336778 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 00:09:47.357872 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 00:09:47.377744 systemd[1]: Stopped target swap.target - Swaps. Jul 7 00:09:47.395632 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 00:09:47.396029 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 00:09:47.422849 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:09:47.443772 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:09:47.464628 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 00:09:47.465048 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 7 00:09:47.486634 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 00:09:47.487033 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 00:09:47.517730 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 00:09:47.518204 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:09:47.537956 systemd[1]: Stopped target paths.target - Path Units. Jul 7 00:09:47.555615 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 00:09:47.556045 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:09:47.576748 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 00:09:47.594709 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 00:09:47.612834 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 00:09:47.613165 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:09:47.632735 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 00:09:47.633036 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 00:09:47.656937 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 00:09:47.657374 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:09:47.676826 systemd[1]: ignition-files.service: Deactivated successfully. 
Jul 7 00:09:47.788289 ignition[1211]: INFO : Ignition 2.19.0 Jul 7 00:09:47.788289 ignition[1211]: INFO : Stage: umount Jul 7 00:09:47.788289 ignition[1211]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:09:47.788289 ignition[1211]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 00:09:47.788289 ignition[1211]: INFO : umount: umount passed Jul 7 00:09:47.788289 ignition[1211]: INFO : POST message to Packet Timeline Jul 7 00:09:47.788289 ignition[1211]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 7 00:09:47.677228 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 00:09:47.694831 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 7 00:09:47.695237 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 7 00:09:47.724385 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 00:09:47.748340 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 00:09:47.748480 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:09:47.786401 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 00:09:47.796286 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 00:09:47.796559 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:09:47.820416 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 00:09:47.820495 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:09:47.872216 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 00:09:47.872577 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 00:09:47.872630 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 00:09:47.888220 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jul 7 00:09:47.888274 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 00:09:48.753352 ignition[1211]: INFO : GET result: OK Jul 7 00:09:49.149418 ignition[1211]: INFO : Ignition finished successfully Jul 7 00:09:49.152396 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 00:09:49.152698 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 00:09:49.171552 systemd[1]: Stopped target network.target - Network. Jul 7 00:09:49.186393 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 00:09:49.186658 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 00:09:49.204553 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 00:09:49.204704 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 00:09:49.223655 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 00:09:49.223819 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 00:09:49.231887 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 00:09:49.232052 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 00:09:49.259622 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 00:09:49.259795 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 00:09:49.268324 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 00:09:49.277211 systemd-networkd[927]: enp1s0f0np0: DHCPv6 lease lost Jul 7 00:09:49.286371 systemd-networkd[927]: enp1s0f1np1: DHCPv6 lease lost Jul 7 00:09:49.295718 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 00:09:49.315372 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 00:09:49.315742 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 00:09:49.335690 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jul 7 00:09:49.336058 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 00:09:49.356274 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 00:09:49.356388 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:09:49.392366 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 00:09:49.412285 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 00:09:49.412328 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:09:49.432431 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:09:49.432523 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:09:49.450525 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 00:09:49.450691 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 00:09:49.470520 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 00:09:49.470690 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:09:49.489763 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:09:49.512470 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 00:09:49.512848 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:09:49.545243 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 00:09:49.545392 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 00:09:49.549660 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 00:09:49.549765 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:09:49.577431 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 00:09:49.577593 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jul 7 00:09:49.607725 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 00:09:49.607894 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 00:09:49.637518 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 00:09:49.637660 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:09:49.688247 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 00:09:49.707207 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 00:09:49.707250 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:09:49.730312 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:09:49.958362 systemd-journald[267]: Received SIGTERM from PID 1 (systemd). Jul 7 00:09:49.730393 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:09:49.750345 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 00:09:49.750584 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 00:09:49.820394 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 00:09:49.820675 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 00:09:49.843371 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 00:09:49.879660 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 00:09:49.899186 systemd[1]: Switching root. 
Jul 7 00:09:50.031391 systemd-journald[267]: Journal stopped Jul 7 00:09:26.013689 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025 Jul 7 00:09:26.013705 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 7 00:09:26.013712 kernel: BIOS-provided physical RAM map: Jul 7 00:09:26.013716 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Jul 7 00:09:26.013720 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Jul 7 00:09:26.013724 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Jul 7 00:09:26.013729 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Jul 7 00:09:26.013733 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Jul 7 00:09:26.013737 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081a4efff] usable Jul 7 00:09:26.013741 kernel: BIOS-e820: [mem 0x0000000081a4f000-0x0000000081a4ffff] ACPI NVS Jul 7 00:09:26.013745 kernel: BIOS-e820: [mem 0x0000000081a50000-0x0000000081a50fff] reserved Jul 7 00:09:26.013750 kernel: BIOS-e820: [mem 0x0000000081a51000-0x000000008afcdfff] usable Jul 7 00:09:26.013755 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved Jul 7 00:09:26.013759 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable Jul 7 00:09:26.013764 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS Jul 7 00:09:26.013769 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved 
Jul 7 00:09:26.013775 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Jul 7 00:09:26.013779 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Jul 7 00:09:26.013784 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 7 00:09:26.013789 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Jul 7 00:09:26.013793 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Jul 7 00:09:26.013798 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jul 7 00:09:26.013803 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Jul 7 00:09:26.013807 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Jul 7 00:09:26.013812 kernel: NX (Execute Disable) protection: active Jul 7 00:09:26.013816 kernel: APIC: Static calls initialized Jul 7 00:09:26.013821 kernel: SMBIOS 3.2.1 present. Jul 7 00:09:26.013826 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 2.6 12/03/2024 Jul 7 00:09:26.013832 kernel: tsc: Detected 3400.000 MHz processor Jul 7 00:09:26.013836 kernel: tsc: Detected 3399.906 MHz TSC Jul 7 00:09:26.013841 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 7 00:09:26.013846 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 7 00:09:26.013851 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Jul 7 00:09:26.013856 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Jul 7 00:09:26.013861 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 7 00:09:26.013866 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Jul 7 00:09:26.013870 kernel: Using GB pages for direct mapping Jul 7 00:09:26.013876 kernel: ACPI: Early table checksum verification disabled Jul 7 00:09:26.013881 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Jul 7 00:09:26.013886 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 
00010013) Jul 7 00:09:26.013893 kernel: ACPI: FACP 0x000000008C58B670 000114 (v06 01072009 AMI 00010013) Jul 7 00:09:26.013898 kernel: ACPI: DSDT 0x000000008C54F268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Jul 7 00:09:26.013903 kernel: ACPI: FACS 0x000000008C66DF80 000040 Jul 7 00:09:26.013908 kernel: ACPI: APIC 0x000000008C58B788 00012C (v04 01072009 AMI 00010013) Jul 7 00:09:26.013914 kernel: ACPI: FPDT 0x000000008C58B8B8 000044 (v01 01072009 AMI 00010013) Jul 7 00:09:26.013919 kernel: ACPI: FIDT 0x000000008C58B900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Jul 7 00:09:26.013924 kernel: ACPI: MCFG 0x000000008C58B9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Jul 7 00:09:26.013929 kernel: ACPI: SPMI 0x000000008C58B9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Jul 7 00:09:26.013934 kernel: ACPI: SSDT 0x000000008C58BA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Jul 7 00:09:26.013939 kernel: ACPI: SSDT 0x000000008C58D548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Jul 7 00:09:26.013944 kernel: ACPI: SSDT 0x000000008C590710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Jul 7 00:09:26.013951 kernel: ACPI: HPET 0x000000008C592A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 7 00:09:26.013956 kernel: ACPI: SSDT 0x000000008C592A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Jul 7 00:09:26.013961 kernel: ACPI: SSDT 0x000000008C593A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Jul 7 00:09:26.013966 kernel: ACPI: UEFI 0x000000008C594320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 7 00:09:26.013971 kernel: ACPI: LPIT 0x000000008C594368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 7 00:09:26.013976 kernel: ACPI: SSDT 0x000000008C594400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Jul 7 00:09:26.013981 kernel: ACPI: SSDT 0x000000008C596BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Jul 7 00:09:26.013986 kernel: ACPI: DBGP 0x000000008C5980C8 000034 (v01 SUPERM SMCI--MB 
00000002 01000013) Jul 7 00:09:26.013991 kernel: ACPI: DBG2 0x000000008C598100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Jul 7 00:09:26.013997 kernel: ACPI: SSDT 0x000000008C598158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Jul 7 00:09:26.014002 kernel: ACPI: DMAR 0x000000008C599CC0 000070 (v01 INTEL EDK2 00000002 01000013) Jul 7 00:09:26.014007 kernel: ACPI: SSDT 0x000000008C599D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Jul 7 00:09:26.014012 kernel: ACPI: TPM2 0x000000008C599E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Jul 7 00:09:26.014017 kernel: ACPI: SSDT 0x000000008C599EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Jul 7 00:09:26.014023 kernel: ACPI: WSMT 0x000000008C59AC40 000028 (v01 SUPERM 01072009 AMI 00010013) Jul 7 00:09:26.014028 kernel: ACPI: EINJ 0x000000008C59AC68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Jul 7 00:09:26.014033 kernel: ACPI: ERST 0x000000008C59AD98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Jul 7 00:09:26.014039 kernel: ACPI: BERT 0x000000008C59AFC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Jul 7 00:09:26.014044 kernel: ACPI: HEST 0x000000008C59AFF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Jul 7 00:09:26.014049 kernel: ACPI: SSDT 0x000000008C59B278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Jul 7 00:09:26.014054 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b670-0x8c58b783] Jul 7 00:09:26.014059 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b66b] Jul 7 00:09:26.014064 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf] Jul 7 00:09:26.014069 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b788-0x8c58b8b3] Jul 7 00:09:26.014074 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b8b8-0x8c58b8fb] Jul 7 00:09:26.014079 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b900-0x8c58b99b] Jul 7 00:09:26.014085 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b9a0-0x8c58b9db] Jul 7 00:09:26.014090 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b9e0-0x8c58ba20] Jul 7 00:09:26.014095 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58ba28-0x8c58d543] Jul 7 00:09:26.014100 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d548-0x8c59070d] Jul 7 00:09:26.014105 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590710-0x8c592a3a] Jul 7 00:09:26.014110 kernel: ACPI: Reserving HPET table memory at [mem 0x8c592a40-0x8c592a77] Jul 7 00:09:26.014115 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a78-0x8c593a25] Jul 7 00:09:26.014120 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593a28-0x8c59431b] Jul 7 00:09:26.014128 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c594320-0x8c594361] Jul 7 00:09:26.014134 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c594368-0x8c5943fb] Jul 7 00:09:26.014139 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594400-0x8c596bdd] Jul 7 00:09:26.014144 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596be0-0x8c5980c1] Jul 7 00:09:26.014149 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5980c8-0x8c5980fb] Jul 7 00:09:26.014154 kernel: ACPI: Reserving DBG2 table memory at [mem 
0x8c598100-0x8c598153] Jul 7 00:09:26.014159 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598158-0x8c599cbe] Jul 7 00:09:26.014164 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599cc0-0x8c599d2f] Jul 7 00:09:26.014169 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599d30-0x8c599e73] Jul 7 00:09:26.014174 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599e78-0x8c599eab] Jul 7 00:09:26.014180 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599eb0-0x8c59ac3e] Jul 7 00:09:26.014185 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59ac40-0x8c59ac67] Jul 7 00:09:26.014190 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59ac68-0x8c59ad97] Jul 7 00:09:26.014195 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad98-0x8c59afc7] Jul 7 00:09:26.014200 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59afc8-0x8c59aff7] Jul 7 00:09:26.014205 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59aff8-0x8c59b273] Jul 7 00:09:26.014210 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b278-0x8c59b3d9] Jul 7 00:09:26.014215 kernel: No NUMA configuration found Jul 7 00:09:26.014220 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Jul 7 00:09:26.014225 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Jul 7 00:09:26.014231 kernel: Zone ranges: Jul 7 00:09:26.014237 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 7 00:09:26.014242 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 7 00:09:26.014247 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Jul 7 00:09:26.014252 kernel: Movable zone start for each node Jul 7 00:09:26.014257 kernel: Early memory node ranges Jul 7 00:09:26.014262 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Jul 7 00:09:26.014267 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Jul 7 00:09:26.014272 kernel: node 0: [mem 0x0000000040400000-0x0000000081a4efff] Jul 7 00:09:26.014278 kernel: node 0: [mem 
0x0000000081a51000-0x000000008afcdfff] Jul 7 00:09:26.014283 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff] Jul 7 00:09:26.014288 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Jul 7 00:09:26.014294 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Jul 7 00:09:26.014302 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Jul 7 00:09:26.014308 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 7 00:09:26.014314 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Jul 7 00:09:26.014319 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jul 7 00:09:26.014326 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Jul 7 00:09:26.014331 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Jul 7 00:09:26.014336 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges Jul 7 00:09:26.014342 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Jul 7 00:09:26.014347 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Jul 7 00:09:26.014353 kernel: ACPI: PM-Timer IO Port: 0x1808 Jul 7 00:09:26.014358 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jul 7 00:09:26.014364 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jul 7 00:09:26.014369 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jul 7 00:09:26.014375 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jul 7 00:09:26.014381 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jul 7 00:09:26.014386 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jul 7 00:09:26.014391 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jul 7 00:09:26.014397 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jul 7 00:09:26.014402 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jul 7 00:09:26.014408 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jul 7 00:09:26.014413 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x0b] high edge lint[0x1]) Jul 7 00:09:26.014418 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jul 7 00:09:26.014425 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jul 7 00:09:26.014430 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jul 7 00:09:26.014436 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jul 7 00:09:26.014441 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jul 7 00:09:26.014447 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Jul 7 00:09:26.014452 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 7 00:09:26.014457 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 7 00:09:26.014463 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 7 00:09:26.014468 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 7 00:09:26.014475 kernel: TSC deadline timer available Jul 7 00:09:26.014480 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Jul 7 00:09:26.014485 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Jul 7 00:09:26.014491 kernel: Booting paravirtualized kernel on bare hardware Jul 7 00:09:26.014496 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 7 00:09:26.014502 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jul 7 00:09:26.014507 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 Jul 7 00:09:26.014513 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 Jul 7 00:09:26.014518 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jul 7 00:09:26.014525 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 
flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 7 00:09:26.014531 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 00:09:26.014536 kernel: random: crng init done Jul 7 00:09:26.014541 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Jul 7 00:09:26.014547 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Jul 7 00:09:26.014552 kernel: Fallback order for Node 0: 0 Jul 7 00:09:26.014558 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416 Jul 7 00:09:26.014563 kernel: Policy zone: Normal Jul 7 00:09:26.014570 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 00:09:26.014575 kernel: software IO TLB: area num 16. Jul 7 00:09:26.014581 kernel: Memory: 32720316K/33452984K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 732408K reserved, 0K cma-reserved) Jul 7 00:09:26.014586 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jul 7 00:09:26.014592 kernel: ftrace: allocating 37966 entries in 149 pages Jul 7 00:09:26.014597 kernel: ftrace: allocated 149 pages with 4 groups Jul 7 00:09:26.014603 kernel: Dynamic Preempt: voluntary Jul 7 00:09:26.014608 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 00:09:26.014614 kernel: rcu: RCU event tracing is enabled. Jul 7 00:09:26.014621 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jul 7 00:09:26.014626 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 00:09:26.014631 kernel: Rude variant of Tasks RCU enabled. Jul 7 00:09:26.014637 kernel: Tracing variant of Tasks RCU enabled. Jul 7 00:09:26.014642 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 7 00:09:26.014648 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jul 7 00:09:26.014653 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Jul 7 00:09:26.014659 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 00:09:26.014664 kernel: Console: colour dummy device 80x25 Jul 7 00:09:26.014670 kernel: printk: console [tty0] enabled Jul 7 00:09:26.014676 kernel: printk: console [ttyS1] enabled Jul 7 00:09:26.014681 kernel: ACPI: Core revision 20230628 Jul 7 00:09:26.014687 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Jul 7 00:09:26.014692 kernel: APIC: Switch to symmetric I/O mode setup Jul 7 00:09:26.014698 kernel: DMAR: Host address width 39 Jul 7 00:09:26.014703 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Jul 7 00:09:26.014709 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Jul 7 00:09:26.014714 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff Jul 7 00:09:26.014720 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Jul 7 00:09:26.014726 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Jul 7 00:09:26.014731 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Jul 7 00:09:26.014737 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jul 7 00:09:26.014742 kernel: x2apic enabled Jul 7 00:09:26.014748 kernel: APIC: Switched APIC routing to: cluster x2apic Jul 7 00:09:26.014753 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jul 7 00:09:26.014759 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
6799.81 BogoMIPS (lpj=3399906) Jul 7 00:09:26.014764 kernel: CPU0: Thermal monitoring enabled (TM1) Jul 7 00:09:26.014770 kernel: process: using mwait in idle threads Jul 7 00:09:26.014776 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 7 00:09:26.014781 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 7 00:09:26.014787 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 7 00:09:26.014792 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jul 7 00:09:26.014798 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jul 7 00:09:26.014803 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 7 00:09:26.014809 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 7 00:09:26.014814 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 7 00:09:26.014820 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 7 00:09:26.014825 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 7 00:09:26.014830 kernel: TAA: Mitigation: TSX disabled Jul 7 00:09:26.014837 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jul 7 00:09:26.014842 kernel: SRBDS: Mitigation: Microcode Jul 7 00:09:26.014847 kernel: GDS: Mitigation: Microcode Jul 7 00:09:26.014853 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 7 00:09:26.014858 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 7 00:09:26.014863 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 7 00:09:26.014869 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 7 00:09:26.014874 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jul 7 00:09:26.014880 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jul 7 00:09:26.014885 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 7 00:09:26.014890 kernel: x86/fpu: 
xstate_offset[3]: 832, xstate_sizes[3]: 64 Jul 7 00:09:26.014897 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jul 7 00:09:26.014902 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Jul 7 00:09:26.014908 kernel: Freeing SMP alternatives memory: 32K Jul 7 00:09:26.014913 kernel: pid_max: default: 32768 minimum: 301 Jul 7 00:09:26.014918 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 7 00:09:26.014924 kernel: landlock: Up and running. Jul 7 00:09:26.014929 kernel: SELinux: Initializing. Jul 7 00:09:26.014935 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 7 00:09:26.014940 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 7 00:09:26.014945 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 7 00:09:26.014951 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 7 00:09:26.014957 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 7 00:09:26.014963 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 7 00:09:26.014968 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jul 7 00:09:26.014974 kernel: ... version: 4 Jul 7 00:09:26.014979 kernel: ... bit width: 48 Jul 7 00:09:26.014985 kernel: ... generic registers: 4 Jul 7 00:09:26.014990 kernel: ... value mask: 0000ffffffffffff Jul 7 00:09:26.014995 kernel: ... max period: 00007fffffffffff Jul 7 00:09:26.015001 kernel: ... fixed-purpose events: 3 Jul 7 00:09:26.015007 kernel: ... 
event mask: 000000070000000f Jul 7 00:09:26.015013 kernel: signal: max sigframe size: 2032 Jul 7 00:09:26.015018 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jul 7 00:09:26.015023 kernel: rcu: Hierarchical SRCU implementation. Jul 7 00:09:26.015029 kernel: rcu: Max phase no-delay instances is 400. Jul 7 00:09:26.015034 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jul 7 00:09:26.015040 kernel: smp: Bringing up secondary CPUs ... Jul 7 00:09:26.015045 kernel: smpboot: x86: Booting SMP configuration: Jul 7 00:09:26.015051 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jul 7 00:09:26.015057 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jul 7 00:09:26.015063 kernel: smp: Brought up 1 node, 16 CPUs Jul 7 00:09:26.015068 kernel: smpboot: Max logical packages: 1 Jul 7 00:09:26.015074 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jul 7 00:09:26.015079 kernel: devtmpfs: initialized Jul 7 00:09:26.015085 kernel: x86/mm: Memory block size: 128MB Jul 7 00:09:26.015090 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81a4f000-0x81a4ffff] (4096 bytes) Jul 7 00:09:26.015096 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes) Jul 7 00:09:26.015101 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 00:09:26.015108 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jul 7 00:09:26.015113 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 00:09:26.015118 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 00:09:26.015124 kernel: audit: initializing netlink subsys (disabled) Jul 7 00:09:26.015131 kernel: audit: type=2000 audit(1751846960.039:1): state=initialized audit_enabled=0 res=1 Jul 
7 00:09:26.015136 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 00:09:26.015142 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 7 00:09:26.015147 kernel: cpuidle: using governor menu Jul 7 00:09:26.015154 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 00:09:26.015159 kernel: dca service started, version 1.12.1 Jul 7 00:09:26.015165 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jul 7 00:09:26.015170 kernel: PCI: Using configuration type 1 for base access Jul 7 00:09:26.015175 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jul 7 00:09:26.015181 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 7 00:09:26.015186 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 00:09:26.015192 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 00:09:26.015197 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 00:09:26.015203 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 00:09:26.015209 kernel: ACPI: Added _OSI(Module Device) Jul 7 00:09:26.015214 kernel: ACPI: Added _OSI(Processor Device) Jul 7 00:09:26.015220 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 00:09:26.015225 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jul 7 00:09:26.015231 kernel: ACPI: Dynamic OEM Table Load: Jul 7 00:09:26.015236 kernel: ACPI: SSDT 0xFFFF985FC1AF2C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jul 7 00:09:26.015242 kernel: ACPI: Dynamic OEM Table Load: Jul 7 00:09:26.015247 kernel: ACPI: SSDT 0xFFFF985FC1AED800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jul 7 00:09:26.015253 kernel: ACPI: Dynamic OEM Table Load: Jul 7 00:09:26.015259 kernel: ACPI: SSDT 0xFFFF985FC0247E00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jul 7 00:09:26.015264 kernel: ACPI: Dynamic OEM Table Load: Jul 
7 00:09:26.015270 kernel: ACPI: SSDT 0xFFFF985FC1E5C000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jul 7 00:09:26.015275 kernel: ACPI: Dynamic OEM Table Load: Jul 7 00:09:26.015280 kernel: ACPI: SSDT 0xFFFF985FC012D000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jul 7 00:09:26.015286 kernel: ACPI: Dynamic OEM Table Load: Jul 7 00:09:26.015291 kernel: ACPI: SSDT 0xFFFF985FC1AF1400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jul 7 00:09:26.015297 kernel: ACPI: _OSC evaluated successfully for all CPUs Jul 7 00:09:26.015302 kernel: ACPI: Interpreter enabled Jul 7 00:09:26.015308 kernel: ACPI: PM: (supports S0 S5) Jul 7 00:09:26.015314 kernel: ACPI: Using IOAPIC for interrupt routing Jul 7 00:09:26.015319 kernel: HEST: Enabling Firmware First mode for corrected errors. Jul 7 00:09:26.015325 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jul 7 00:09:26.015330 kernel: HEST: Table parsing has been initialized. Jul 7 00:09:26.015335 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Jul 7 00:09:26.015341 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 7 00:09:26.015346 kernel: PCI: Ignoring E820 reservations for host bridge windows Jul 7 00:09:26.015351 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jul 7 00:09:26.015358 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jul 7 00:09:26.015364 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jul 7 00:09:26.015369 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jul 7 00:09:26.015374 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jul 7 00:09:26.015380 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jul 7 00:09:26.015385 kernel: ACPI: \_TZ_.FN00: New power resource Jul 7 00:09:26.015391 kernel: ACPI: \_TZ_.FN01: New power resource Jul 7 00:09:26.015396 kernel: ACPI: \_TZ_.FN02: New power resource Jul 7 00:09:26.015402 kernel: ACPI: \_TZ_.FN03: New power resource Jul 7 00:09:26.015408 kernel: ACPI: \_TZ_.FN04: New power resource Jul 7 00:09:26.015414 kernel: ACPI: \PIN_: New power resource Jul 7 00:09:26.015419 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jul 7 00:09:26.015494 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 00:09:26.015549 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jul 7 00:09:26.015600 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jul 7 00:09:26.015608 kernel: PCI host bridge to bus 0000:00 Jul 7 00:09:26.015662 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 7 00:09:26.015709 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 7 00:09:26.015752 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 7 00:09:26.015796 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Jul 7 00:09:26.015839 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff 
window] Jul 7 00:09:26.015883 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jul 7 00:09:26.015942 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Jul 7 00:09:26.016005 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Jul 7 00:09:26.016056 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jul 7 00:09:26.016111 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Jul 7 00:09:26.016164 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Jul 7 00:09:26.016218 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Jul 7 00:09:26.016267 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Jul 7 00:09:26.016324 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Jul 7 00:09:26.016374 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Jul 7 00:09:26.016423 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Jul 7 00:09:26.016476 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Jul 7 00:09:26.016525 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Jul 7 00:09:26.016575 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Jul 7 00:09:26.016630 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Jul 7 00:09:26.016680 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jul 7 00:09:26.016736 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Jul 7 00:09:26.016787 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jul 7 00:09:26.016839 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Jul 7 00:09:26.016889 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Jul 7 00:09:26.016940 kernel: pci 0000:00:16.0: PME# supported from D3hot Jul 7 00:09:26.016993 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Jul 7 00:09:26.017042 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Jul 7 
00:09:26.017101 kernel: pci 0000:00:16.1: PME# supported from D3hot Jul 7 00:09:26.017157 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Jul 7 00:09:26.017208 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Jul 7 00:09:26.017257 kernel: pci 0000:00:16.4: PME# supported from D3hot Jul 7 00:09:26.017312 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Jul 7 00:09:26.017362 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Jul 7 00:09:26.017411 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Jul 7 00:09:26.017461 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Jul 7 00:09:26.017509 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Jul 7 00:09:26.017558 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Jul 7 00:09:26.017606 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Jul 7 00:09:26.017658 kernel: pci 0000:00:17.0: PME# supported from D3hot Jul 7 00:09:26.017712 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Jul 7 00:09:26.017765 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jul 7 00:09:26.017824 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Jul 7 00:09:26.017877 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jul 7 00:09:26.017930 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Jul 7 00:09:26.017983 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jul 7 00:09:26.018037 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Jul 7 00:09:26.018087 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jul 7 00:09:26.018145 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Jul 7 00:09:26.018239 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Jul 7 00:09:26.018294 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Jul 7 00:09:26.018343 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jul 7 00:09:26.018397 
kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Jul 7 00:09:26.018452 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Jul 7 00:09:26.018503 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Jul 7 00:09:26.018555 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Jul 7 00:09:26.018610 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Jul 7 00:09:26.018659 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Jul 7 00:09:26.018717 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Jul 7 00:09:26.018770 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Jul 7 00:09:26.018821 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Jul 7 00:09:26.018873 kernel: pci 0000:01:00.0: PME# supported from D3cold Jul 7 00:09:26.018926 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jul 7 00:09:26.018978 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jul 7 00:09:26.019033 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Jul 7 00:09:26.019086 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Jul 7 00:09:26.019140 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Jul 7 00:09:26.019191 kernel: pci 0000:01:00.1: PME# supported from D3cold Jul 7 00:09:26.019243 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jul 7 00:09:26.019296 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jul 7 00:09:26.019347 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 7 00:09:26.019396 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jul 7 00:09:26.019447 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jul 7 00:09:26.019497 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jul 7 00:09:26.019553 kernel: pci 
0000:03:00.0: working around ROM BAR overlap defect Jul 7 00:09:26.019604 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jul 7 00:09:26.019659 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jul 7 00:09:26.019711 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jul 7 00:09:26.019762 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jul 7 00:09:26.019813 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 7 00:09:26.019864 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jul 7 00:09:26.019914 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 7 00:09:26.019964 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jul 7 00:09:26.020022 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jul 7 00:09:26.020073 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jul 7 00:09:26.020127 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jul 7 00:09:26.020179 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jul 7 00:09:26.020231 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jul 7 00:09:26.020283 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jul 7 00:09:26.020333 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jul 7 00:09:26.020384 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 7 00:09:26.020436 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jul 7 00:09:26.020487 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jul 7 00:09:26.020542 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jul 7 00:09:26.020595 kernel: pci 0000:06:00.0: enabling Extended Tags Jul 7 00:09:26.020645 kernel: pci 0000:06:00.0: supports D1 D2 Jul 7 00:09:26.020696 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 7 00:09:26.020746 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jul 7 00:09:26.020799 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jul 7 
00:09:26.020849 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jul 7 00:09:26.020905 kernel: pci_bus 0000:07: extended config space not accessible Jul 7 00:09:26.020963 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jul 7 00:09:26.021017 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jul 7 00:09:26.021072 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jul 7 00:09:26.021127 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jul 7 00:09:26.021183 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 7 00:09:26.021236 kernel: pci 0000:07:00.0: supports D1 D2 Jul 7 00:09:26.021288 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 7 00:09:26.021340 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jul 7 00:09:26.021391 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jul 7 00:09:26.021442 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jul 7 00:09:26.021451 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jul 7 00:09:26.021459 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jul 7 00:09:26.021465 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jul 7 00:09:26.021471 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jul 7 00:09:26.021477 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jul 7 00:09:26.021483 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jul 7 00:09:26.021489 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jul 7 00:09:26.021494 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jul 7 00:09:26.021500 kernel: iommu: Default domain type: Translated Jul 7 00:09:26.021506 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 7 00:09:26.021512 kernel: PCI: Using ACPI for IRQ routing Jul 7 00:09:26.021518 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 7 00:09:26.021524 kernel: e820: reserve RAM 
buffer [mem 0x00099800-0x0009ffff] Jul 7 00:09:26.021530 kernel: e820: reserve RAM buffer [mem 0x81a4f000-0x83ffffff] Jul 7 00:09:26.021535 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] Jul 7 00:09:26.021541 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] Jul 7 00:09:26.021546 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jul 7 00:09:26.021552 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jul 7 00:09:26.021604 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jul 7 00:09:26.021656 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jul 7 00:09:26.021712 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 7 00:09:26.021721 kernel: vgaarb: loaded Jul 7 00:09:26.021727 kernel: clocksource: Switched to clocksource tsc-early Jul 7 00:09:26.021733 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 00:09:26.021739 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 00:09:26.021744 kernel: pnp: PnP ACPI init Jul 7 00:09:26.021796 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jul 7 00:09:26.021847 kernel: pnp 00:02: [dma 0 disabled] Jul 7 00:09:26.021899 kernel: pnp 00:03: [dma 0 disabled] Jul 7 00:09:26.021952 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jul 7 00:09:26.021997 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jul 7 00:09:26.022047 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Jul 7 00:09:26.022092 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Jul 7 00:09:26.022142 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Jul 7 00:09:26.022191 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Jul 7 00:09:26.022237 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Jul 7 00:09:26.022285 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Jul 7 00:09:26.022331 kernel: system 
00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Jul 7 00:09:26.022377 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Jul 7 00:09:26.022427 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Jul 7 00:09:26.022475 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Jul 7 00:09:26.022522 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jul 7 00:09:26.022568 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Jul 7 00:09:26.022613 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Jul 7 00:09:26.022658 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Jul 7 00:09:26.022704 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Jul 7 00:09:26.022753 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Jul 7 00:09:26.022762 kernel: pnp: PnP ACPI: found 9 devices Jul 7 00:09:26.022770 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 7 00:09:26.022776 kernel: NET: Registered PF_INET protocol family Jul 7 00:09:26.022782 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 00:09:26.022787 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jul 7 00:09:26.022793 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 00:09:26.022799 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 00:09:26.022805 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 7 00:09:26.022811 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jul 7 00:09:26.022816 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 7 00:09:26.022823 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 7 00:09:26.022829 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 
00:09:26.022835 kernel: NET: Registered PF_XDP protocol family Jul 7 00:09:26.022885 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jul 7 00:09:26.022936 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jul 7 00:09:26.022987 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jul 7 00:09:26.023040 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jul 7 00:09:26.023091 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jul 7 00:09:26.023148 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jul 7 00:09:26.023200 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jul 7 00:09:26.023250 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 7 00:09:26.023300 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jul 7 00:09:26.023348 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jul 7 00:09:26.023399 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jul 7 00:09:26.023450 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jul 7 00:09:26.023500 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 7 00:09:26.023549 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jul 7 00:09:26.023599 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jul 7 00:09:26.023650 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 7 00:09:26.023699 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jul 7 00:09:26.023749 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jul 7 00:09:26.023801 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jul 7 00:09:26.023853 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jul 7 00:09:26.023904 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jul 7 00:09:26.023955 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jul 7 00:09:26.024005 
kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jul 7 00:09:26.024055 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jul 7 00:09:26.024101 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 7 00:09:26.024149 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 7 00:09:26.024196 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 7 00:09:26.024241 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 7 00:09:26.024284 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jul 7 00:09:26.024329 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jul 7 00:09:26.024378 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jul 7 00:09:26.024425 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jul 7 00:09:26.024475 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jul 7 00:09:26.024524 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jul 7 00:09:26.024577 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jul 7 00:09:26.024623 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jul 7 00:09:26.024673 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jul 7 00:09:26.024719 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jul 7 00:09:26.024767 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jul 7 00:09:26.024814 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jul 7 00:09:26.024824 kernel: PCI: CLS 64 bytes, default 64 Jul 7 00:09:26.024830 kernel: DMAR: No ATSR found Jul 7 00:09:26.024836 kernel: DMAR: No SATC found Jul 7 00:09:26.024842 kernel: DMAR: dmar0: Using Queued invalidation Jul 7 00:09:26.024892 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jul 7 00:09:26.024942 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jul 7 00:09:26.024994 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jul 7 
00:09:26.025044 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jul 7 00:09:26.025097 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jul 7 00:09:26.025149 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jul 7 00:09:26.025199 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jul 7 00:09:26.025249 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jul 7 00:09:26.025298 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jul 7 00:09:26.025348 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jul 7 00:09:26.025397 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jul 7 00:09:26.025447 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jul 7 00:09:26.025499 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jul 7 00:09:26.025549 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jul 7 00:09:26.025598 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jul 7 00:09:26.025648 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jul 7 00:09:26.025697 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jul 7 00:09:26.025746 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jul 7 00:09:26.025796 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jul 7 00:09:26.025846 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jul 7 00:09:26.025898 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Jul 7 00:09:26.025950 kernel: pci 0000:01:00.0: Adding to iommu group 1 Jul 7 00:09:26.026001 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jul 7 00:09:26.026053 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jul 7 00:09:26.026104 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jul 7 00:09:26.026186 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jul 7 00:09:26.026261 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jul 7 00:09:26.026269 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jul 7 00:09:26.026275 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 7 00:09:26.026283 kernel: software IO TLB: mapped [mem 
0x0000000086fce000-0x000000008afce000] (64MB) Jul 7 00:09:26.026289 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jul 7 00:09:26.026295 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jul 7 00:09:26.026301 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jul 7 00:09:26.026306 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jul 7 00:09:26.026358 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jul 7 00:09:26.026367 kernel: Initialise system trusted keyrings Jul 7 00:09:26.026373 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jul 7 00:09:26.026381 kernel: Key type asymmetric registered Jul 7 00:09:26.026386 kernel: Asymmetric key parser 'x509' registered Jul 7 00:09:26.026392 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 7 00:09:26.026398 kernel: io scheduler mq-deadline registered Jul 7 00:09:26.026403 kernel: io scheduler kyber registered Jul 7 00:09:26.026409 kernel: io scheduler bfq registered Jul 7 00:09:26.026459 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jul 7 00:09:26.026508 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jul 7 00:09:26.026559 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Jul 7 00:09:26.026611 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jul 7 00:09:26.026662 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jul 7 00:09:26.026712 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jul 7 00:09:26.026768 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jul 7 00:09:26.026777 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jul 7 00:09:26.026783 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. 
Jul 7 00:09:26.026789 kernel: pstore: Using crash dump compression: deflate
Jul 7 00:09:26.026797 kernel: pstore: Registered erst as persistent store backend
Jul 7 00:09:26.026802 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 00:09:26.026808 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 00:09:26.026814 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 00:09:26.026820 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 7 00:09:26.026826 kernel: hpet_acpi_add: no address or irqs in _CRS
Jul 7 00:09:26.026879 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Jul 7 00:09:26.026888 kernel: i8042: PNP: No PS/2 controller found.
Jul 7 00:09:26.026933 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Jul 7 00:09:26.026983 kernel: rtc_cmos rtc_cmos: registered as rtc0
Jul 7 00:09:26.027028 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-07-07T00:09:24 UTC (1751846964)
Jul 7 00:09:26.027075 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Jul 7 00:09:26.027083 kernel: intel_pstate: Intel P-state driver initializing
Jul 7 00:09:26.027089 kernel: intel_pstate: Disabling energy efficiency optimization
Jul 7 00:09:26.027095 kernel: intel_pstate: HWP enabled
Jul 7 00:09:26.027101 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Jul 7 00:09:26.027107 kernel: vesafb: scrolling: redraw
Jul 7 00:09:26.027114 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Jul 7 00:09:26.027120 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000001123de9b, using 768k, total 768k
Jul 7 00:09:26.027129 kernel: Console: switching to colour frame buffer device 128x48
Jul 7 00:09:26.027134 kernel: fb0: VESA VGA frame buffer device
Jul 7 00:09:26.027140 kernel: NET: Registered PF_INET6 protocol family
Jul 7 00:09:26.027146 kernel: Segment Routing with IPv6
Jul 7 00:09:26.027152 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 00:09:26.027158 kernel: NET: Registered PF_PACKET protocol family
Jul 7 00:09:26.027163 kernel: Key type dns_resolver registered
Jul 7 00:09:26.027170 kernel: microcode: Current revision: 0x00000102
Jul 7 00:09:26.027176 kernel: microcode: Microcode Update Driver: v2.2.
Jul 7 00:09:26.027182 kernel: IPI shorthand broadcast: enabled
Jul 7 00:09:26.027188 kernel: sched_clock: Marking stable (1561000636, 1379331553)->(4400963659, -1460631470)
Jul 7 00:09:26.027193 kernel: registered taskstats version 1
Jul 7 00:09:26.027199 kernel: Loading compiled-in X.509 certificates
Jul 7 00:09:26.027205 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 7 00:09:26.027210 kernel: Key type .fscrypt registered
Jul 7 00:09:26.027216 kernel: Key type fscrypt-provisioning registered
Jul 7 00:09:26.027223 kernel: ima: Allocated hash algorithm: sha1
Jul 7 00:09:26.027229 kernel: ima: No architecture policies found
Jul 7 00:09:26.027234 kernel: clk: Disabling unused clocks
Jul 7 00:09:26.027240 kernel: Freeing unused kernel image (initmem) memory: 42868K
Jul 7 00:09:26.027246 kernel: Write protecting the kernel read-only data: 36864k
Jul 7 00:09:26.027252 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 7 00:09:26.027257 kernel: Run /init as init process
Jul 7 00:09:26.027263 kernel: with arguments:
Jul 7 00:09:26.027269 kernel: /init
Jul 7 00:09:26.027275 kernel: with environment:
Jul 7 00:09:26.027281 kernel: HOME=/
Jul 7 00:09:26.027286 kernel: TERM=linux
Jul 7 00:09:26.027292 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 00:09:26.027299 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 00:09:26.027306 systemd[1]: Detected architecture x86-64.
Jul 7 00:09:26.027312 systemd[1]: Running in initrd.
Jul 7 00:09:26.027319 systemd[1]: No hostname configured, using default hostname.
Jul 7 00:09:26.027325 systemd[1]: Hostname set to .
Jul 7 00:09:26.027331 systemd[1]: Initializing machine ID from random generator.
Jul 7 00:09:26.027337 systemd[1]: Queued start job for default target initrd.target.
Jul 7 00:09:26.027343 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:09:26.027349 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:09:26.027355 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 00:09:26.027361 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 00:09:26.027368 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 00:09:26.027374 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 00:09:26.027381 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 00:09:26.027387 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 00:09:26.027393 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz
Jul 7 00:09:26.027399 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:09:26.027405 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns
Jul 7 00:09:26.027412 kernel: clocksource: Switched to clocksource tsc
Jul 7 00:09:26.027418 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:09:26.027424 systemd[1]: Reached target paths.target - Path Units.
Jul 7 00:09:26.027430 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 00:09:26.027436 systemd[1]: Reached target swap.target - Swaps.
Jul 7 00:09:26.027442 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 00:09:26.027448 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 00:09:26.027454 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 00:09:26.027460 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 00:09:26.027467 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 00:09:26.027473 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:09:26.027479 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:09:26.027485 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:09:26.027491 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 00:09:26.027497 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 00:09:26.027503 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 00:09:26.027509 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 00:09:26.027516 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 00:09:26.027522 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 00:09:26.027538 systemd-journald[267]: Collecting audit messages is disabled.
Jul 7 00:09:26.027552 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 00:09:26.027560 systemd-journald[267]: Journal started
Jul 7 00:09:26.027574 systemd-journald[267]: Runtime Journal (/run/log/journal/4a4a0d013488432583207b0b378b27ee) is 8.0M, max 639.9M, 631.9M free.
Jul 7 00:09:26.041934 systemd-modules-load[269]: Inserted module 'overlay'
Jul 7 00:09:26.071256 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:09:26.117173 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 00:09:26.138159 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 00:09:26.138180 kernel: Bridge firewalling registered
Jul 7 00:09:26.154265 systemd-modules-load[269]: Inserted module 'br_netfilter'
Jul 7 00:09:26.165648 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 00:09:26.174569 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:09:26.174678 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 00:09:26.174779 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:09:26.192484 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 00:09:26.254479 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 00:09:26.256944 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 00:09:26.282750 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:09:26.315551 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:09:26.335469 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 00:09:26.356509 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:09:26.393417 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:09:26.405306 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 00:09:26.405884 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 00:09:26.411447 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:09:26.419421 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:09:26.427311 systemd-resolved[296]: Positive Trust Anchors:
Jul 7 00:09:26.427317 systemd-resolved[296]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 00:09:26.427349 systemd-resolved[296]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 00:09:26.429509 systemd-resolved[296]: Defaulting to hostname 'linux'.
Jul 7 00:09:26.430344 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 00:09:26.451360 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:09:26.481688 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 00:09:26.589021 dracut-cmdline[311]: dracut-dracut-053
Jul 7 00:09:26.596352 dracut-cmdline[311]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 00:09:26.775137 kernel: SCSI subsystem initialized
Jul 7 00:09:26.799131 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 00:09:26.822132 kernel: iscsi: registered transport (tcp)
Jul 7 00:09:26.855913 kernel: iscsi: registered transport (qla4xxx)
Jul 7 00:09:26.855931 kernel: QLogic iSCSI HBA Driver
Jul 7 00:09:26.888768 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 00:09:26.916393 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 00:09:26.974908 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 00:09:26.974931 kernel: device-mapper: uevent: version 1.0.3
Jul 7 00:09:26.994832 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 7 00:09:27.053197 kernel: raid6: avx2x4 gen() 53189 MB/s
Jul 7 00:09:27.085158 kernel: raid6: avx2x2 gen() 53777 MB/s
Jul 7 00:09:27.121705 kernel: raid6: avx2x1 gen() 45146 MB/s
Jul 7 00:09:27.121724 kernel: raid6: using algorithm avx2x2 gen() 53777 MB/s
Jul 7 00:09:27.169785 kernel: raid6: .... xor() 31735 MB/s, rmw enabled
Jul 7 00:09:27.169802 kernel: raid6: using avx2x2 recovery algorithm
Jul 7 00:09:27.211163 kernel: xor: automatically using best checksumming function   avx
Jul 7 00:09:27.325162 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 00:09:27.330755 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 00:09:27.353431 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:09:27.360360 systemd-udevd[495]: Using default interface naming scheme 'v255'.
Jul 7 00:09:27.364368 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:09:27.400343 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 00:09:27.466761 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
Jul 7 00:09:27.537737 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 00:09:27.557553 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 00:09:27.646924 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:09:27.661133 kernel: cryptd: max_cpu_qlen set to 1000
Jul 7 00:09:27.661169 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 7 00:09:27.687133 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 7 00:09:27.717549 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 00:09:27.753282 kernel: PTP clock support registered
Jul 7 00:09:27.753299 kernel: libata version 3.00 loaded.
Jul 7 00:09:27.753308 kernel: ACPI: bus type USB registered
Jul 7 00:09:27.753320 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 7 00:09:27.753328 kernel: usbcore: registered new interface driver usbfs
Jul 7 00:09:27.728105 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:09:27.823907 kernel: usbcore: registered new interface driver hub
Jul 7 00:09:27.823926 kernel: usbcore: registered new device driver usb
Jul 7 00:09:27.823935 kernel: AES CTR mode by8 optimization enabled
Jul 7 00:09:27.810308 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:09:27.834284 kernel: ahci 0000:00:17.0: version 3.0
Jul 7 00:09:27.834167 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:09:27.874046 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Jul 7 00:09:27.874139 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Jul 7 00:09:27.871170 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 00:09:27.955202 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014
Jul 7 00:09:27.955310 kernel: scsi host0: ahci
Jul 7 00:09:27.955381 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Jul 7 00:09:27.955450 kernel: scsi host1: ahci
Jul 7 00:09:27.955518 kernel: scsi host2: ahci
Jul 7 00:09:27.899162 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 00:09:27.978871 kernel: scsi host3: ahci
Jul 7 00:09:27.978958 kernel: scsi host4: ahci
Jul 7 00:09:27.899204 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:09:28.006242 kernel: scsi host5: ahci
Jul 7 00:09:27.992725 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:09:28.134006 kernel: scsi host6: ahci
Jul 7 00:09:28.134136 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127
Jul 7 00:09:28.134150 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127
Jul 7 00:09:28.134160 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127
Jul 7 00:09:28.134167 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127
Jul 7 00:09:28.134174 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127
Jul 7 00:09:28.134183 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127
Jul 7 00:09:28.134191 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127
Jul 7 00:09:28.155706 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Jul 7 00:09:28.155796 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Jul 7 00:09:28.190596 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Jul 7 00:09:28.190689 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
Jul 7 00:09:28.190781 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Jul 7 00:09:28.203052 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged
Jul 7 00:09:28.203145 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Jul 7 00:09:28.223947 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Jul 7 00:09:28.223965 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Jul 7 00:09:28.224045 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Jul 7 00:09:28.270185 kernel: igb 0000:03:00.0: added PHC on eth0
Jul 7 00:09:28.270277 kernel: hub 1-0:1.0: USB hub found
Jul 7 00:09:28.271264 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 00:09:28.699219 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Jul 7 00:09:28.699313 kernel: hub 1-0:1.0: 16 ports detected
Jul 7 00:09:28.699384 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:e6:d4
Jul 7 00:09:28.699452 kernel: hub 2-0:1.0: USB hub found
Jul 7 00:09:28.699526 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
Jul 7 00:09:28.699593 kernel: hub 2-0:1.0: 10 ports detected
Jul 7 00:09:28.699654 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Jul 7 00:09:28.699718 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Jul 7 00:09:28.699727 kernel: igb 0000:04:00.0: added PHC on eth1
Jul 7 00:09:28.699794 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jul 7 00:09:28.699802 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Jul 7 00:09:28.699865 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 7 00:09:28.699875 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:e6:d5
Jul 7 00:09:28.699939 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 7 00:09:28.699947 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
Jul 7 00:09:28.700009 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jul 7 00:09:28.700017 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Jul 7 00:09:28.700079 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jul 7 00:09:28.700149 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014
Jul 7 00:09:28.700217 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Jul 7 00:09:28.700282 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jul 7 00:09:28.700290 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Jul 7 00:09:28.700362 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Jul 7 00:09:28.700370 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 7 00:09:28.700378 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Jul 7 00:09:28.700385 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jul 7 00:09:28.700393 kernel: hub 1-14:1.0: USB hub found
Jul 7 00:09:28.682245 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:09:28.882264 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jul 7 00:09:28.882275 kernel: hub 1-14:1.0: 4 ports detected
Jul 7 00:09:28.882354 kernel: ata1.00: Features: NCQ-prio
Jul 7 00:09:28.882363 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
Jul 7 00:09:28.882432 kernel: ata2.00: Features: NCQ-prio
Jul 7 00:09:28.882440 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
Jul 7 00:09:28.882508 kernel: ata1.00: configured for UDMA/133
Jul 7 00:09:28.882516 kernel: ata2.00: configured for UDMA/133
Jul 7 00:09:28.882523 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Jul 7 00:09:28.882592 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Jul 7 00:09:28.682282 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:09:28.915184 kernel: igb 0000:03:00.0 eno1: renamed from eth0
Jul 7 00:09:28.915285 kernel: ata2.00: Enabling discard_zeroes_data
Jul 7 00:09:28.864320 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:09:29.303979 kernel: ata1.00: Enabling discard_zeroes_data
Jul 7 00:09:29.303997 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Jul 7 00:09:29.304131 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks
Jul 7 00:09:29.304203 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Jul 7 00:09:29.304271 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 7 00:09:29.304333 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 7 00:09:29.304395 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Jul 7 00:09:29.304456 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul 7 00:09:29.304517 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
Jul 7 00:09:29.304578 kernel: ata1.00: Enabling discard_zeroes_data
Jul 7 00:09:29.304586 kernel: igb 0000:04:00.0 eno2: renamed from eth1
Jul 7 00:09:29.304659 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 00:09:29.304668 kernel: GPT:9289727 != 937703087
Jul 7 00:09:29.304675 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 00:09:29.304683 kernel: GPT:9289727 != 937703087
Jul 7 00:09:29.304690 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 00:09:29.304696 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 00:09:29.304704 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 7 00:09:29.304765 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Jul 7 00:09:29.304872 kernel: sd 1:0:0:0: [sdb] Write Protect is off
Jul 7 00:09:29.304937 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Jul 7 00:09:29.304997 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul 7 00:09:29.305059 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jul 7 00:09:29.305130 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes
Jul 7 00:09:29.305199 kernel: ata2.00: Enabling discard_zeroes_data
Jul 7 00:09:29.305208 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Jul 7 00:09:28.936287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:09:29.304113 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:09:29.327135 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 7 00:09:29.327156 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2
Jul 7 00:09:29.351338 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM.
Jul 7 00:09:29.372679 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (690)
Jul 7 00:09:29.372693 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (688)
Jul 7 00:09:29.376238 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT.
Jul 7 00:09:29.412736 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Jul 7 00:09:29.526275 kernel: usbcore: registered new interface driver usbhid
Jul 7 00:09:29.526314 kernel: usbhid: USB HID core driver
Jul 7 00:09:29.526334 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Jul 7 00:09:29.526358 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0
Jul 7 00:09:29.526556 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Jul 7 00:09:29.502630 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:09:29.576868 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Jul 7 00:09:29.576882 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Jul 7 00:09:29.546191 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A.
Jul 7 00:09:29.611944 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A.
Jul 7 00:09:29.652415 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 00:09:29.695215 kernel: ata1.00: Enabling discard_zeroes_data
Jul 7 00:09:29.695228 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 00:09:29.669633 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:09:29.733205 kernel: ata1.00: Enabling discard_zeroes_data
Jul 7 00:09:29.733217 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 00:09:29.733225 disk-uuid[715]: Primary Header is updated.
Jul 7 00:09:29.733225 disk-uuid[715]: Secondary Entries is updated.
Jul 7 00:09:29.733225 disk-uuid[715]: Secondary Header is updated.
Jul 7 00:09:29.770770 kernel: ata1.00: Enabling discard_zeroes_data
Jul 7 00:09:29.770779 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 00:09:29.777311 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:09:30.750493 kernel: ata1.00: Enabling discard_zeroes_data
Jul 7 00:09:30.769853 disk-uuid[716]: The operation has completed successfully.
Jul 7 00:09:30.778268 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 00:09:30.803104 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 00:09:30.803224 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 00:09:30.837390 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 00:09:30.875262 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 7 00:09:30.875328 sh[744]: Success
Jul 7 00:09:30.907400 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 00:09:30.918129 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 00:09:30.949199 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 00:09:30.989439 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 7 00:09:30.989458 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:09:31.010946 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 7 00:09:31.030050 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 7 00:09:31.048354 kernel: BTRFS info (device dm-0): using free space tree
Jul 7 00:09:31.085175 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 7 00:09:31.087203 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 00:09:31.087529 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 00:09:31.101441 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 00:09:31.102860 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 00:09:31.262284 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:09:31.262298 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:09:31.262306 kernel: BTRFS info (device sda6): using free space tree
Jul 7 00:09:31.262313 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 7 00:09:31.262324 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 00:09:31.262331 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:09:31.232787 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 7 00:09:31.271504 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 00:09:31.291319 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 00:09:31.317736 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:09:31.348257 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 00:09:31.359798 systemd-networkd[927]: lo: Link UP
Jul 7 00:09:31.357653 ignition[839]: Ignition 2.19.0
Jul 7 00:09:31.359800 unknown[839]: fetched base config from "system"
Jul 7 00:09:31.357658 ignition[839]: Stage: fetch-offline
Jul 7 00:09:31.359801 systemd-networkd[927]: lo: Gained carrier
Jul 7 00:09:31.357683 ignition[839]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:09:31.359804 unknown[839]: fetched user config from "system"
Jul 7 00:09:31.357688 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 7 00:09:31.360635 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 00:09:31.357746 ignition[839]: parsed url from cmdline: ""
Jul 7 00:09:31.362486 systemd-networkd[927]: Enumeration completed
Jul 7 00:09:31.357748 ignition[839]: no config URL provided
Jul 7 00:09:31.363370 systemd-networkd[927]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:09:31.357751 ignition[839]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 00:09:31.380443 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 00:09:31.357774 ignition[839]: parsing config with SHA512: 59190e8749afca63946c530e424e90ea340c33f058b416720346109b8c81e5cd10b0ea4b663d581cf8dcb2995bd9d263e0c6ec5135eb83f33ba8efa9ab0be806
Jul 7 00:09:31.391226 systemd-networkd[927]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:09:31.360018 ignition[839]: fetch-offline: fetch-offline passed
Jul 7 00:09:31.399580 systemd[1]: Reached target network.target - Network.
Jul 7 00:09:31.360021 ignition[839]: POST message to Packet Timeline
Jul 7 00:09:31.419467 systemd-networkd[927]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:09:31.360024 ignition[839]: POST Status error: resource requires networking
Jul 7 00:09:31.420376 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 7 00:09:31.360059 ignition[839]: Ignition finished successfully
Jul 7 00:09:31.430369 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 00:09:31.448714 ignition[939]: Ignition 2.19.0
Jul 7 00:09:31.448724 ignition[939]: Stage: kargs
Jul 7 00:09:31.647240 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Jul 7 00:09:31.643651 systemd-networkd[927]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:09:31.448952 ignition[939]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:09:31.448967 ignition[939]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 7 00:09:31.450223 ignition[939]: kargs: kargs passed
Jul 7 00:09:31.450229 ignition[939]: POST message to Packet Timeline
Jul 7 00:09:31.450247 ignition[939]: GET https://metadata.packet.net/metadata: attempt #1
Jul 7 00:09:31.451180 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53659->[::1]:53: read: connection refused
Jul 7 00:09:31.652211 ignition[939]: GET https://metadata.packet.net/metadata: attempt #2
Jul 7 00:09:31.653229 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55532->[::1]:53: read: connection refused
Jul 7 00:09:31.929142 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Jul 7 00:09:31.930391 systemd-networkd[927]: eno1: Link UP
Jul 7 00:09:31.930690 systemd-networkd[927]: eno2: Link UP
Jul 7 00:09:31.930826 systemd-networkd[927]: enp1s0f0np0: Link UP
Jul 7 00:09:31.930982 systemd-networkd[927]: enp1s0f0np0: Gained carrier
Jul 7 00:09:31.939373 systemd-networkd[927]: enp1s0f1np1: Link UP
Jul 7 00:09:31.971287 systemd-networkd[927]: enp1s0f0np0: DHCPv4 address 147.28.180.255/31, gateway 147.28.180.254 acquired from 145.40.83.140
Jul 7 00:09:32.054298 ignition[939]: GET https://metadata.packet.net/metadata: attempt #3
Jul 7 00:09:32.055454 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34378->[::1]:53: read: connection refused
Jul 7 00:09:32.650864 systemd-networkd[927]: enp1s0f1np1: Gained carrier
Jul 7 00:09:32.855947 ignition[939]: GET https://metadata.packet.net/metadata: attempt #4
Jul 7 00:09:32.857275 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43765->[::1]:53: read: connection refused
Jul 7 00:09:33.098739 systemd-networkd[927]: enp1s0f0np0: Gained IPv6LL
Jul 7 00:09:34.122731 systemd-networkd[927]: enp1s0f1np1: Gained IPv6LL
Jul 7 00:09:34.458305 ignition[939]: GET https://metadata.packet.net/metadata: attempt #5
Jul 7 00:09:34.459480 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46429->[::1]:53: read: connection refused
Jul 7 00:09:37.662167 ignition[939]: GET https://metadata.packet.net/metadata: attempt #6
Jul 7 00:09:38.766889 ignition[939]: GET result: OK
Jul 7 00:09:39.233912 ignition[939]: Ignition finished successfully
Jul 7 00:09:39.239970 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 00:09:39.265376 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 00:09:39.271508 ignition[959]: Ignition 2.19.0
Jul 7 00:09:39.271512 ignition[959]: Stage: disks
Jul 7 00:09:39.271619 ignition[959]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:09:39.271626 ignition[959]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 7 00:09:39.272141 ignition[959]: disks: disks passed
Jul 7 00:09:39.272144 ignition[959]: POST message to Packet Timeline
Jul 7 00:09:39.272152 ignition[959]: GET https://metadata.packet.net/metadata: attempt #1
Jul 7 00:09:40.341569 ignition[959]: GET result: OK
Jul 7 00:09:40.677548 ignition[959]: Ignition finished successfully
Jul 7 00:09:40.679369 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 00:09:40.695336 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 00:09:40.713378 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 00:09:40.734373 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 00:09:40.756548 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 00:09:40.777533 systemd[1]: Reached target basic.target - Basic System.
Jul 7 00:09:40.807652 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 00:09:40.834895 systemd-fsck[975]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 7 00:09:40.846275 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 00:09:40.879360 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 00:09:40.979174 kernel: EXT4-fs (sda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 7 00:09:40.979152 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 00:09:40.987553 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 00:09:41.019337 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:09:41.027663 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 00:09:41.159242 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (984)
Jul 7 00:09:41.159258 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:09:41.159266 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:09:41.159336 kernel: BTRFS info (device sda6): using free space tree
Jul 7 00:09:41.159346 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 7 00:09:41.159354 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 00:09:41.049051 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 7 00:09:41.112243 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Jul 7 00:09:41.170404 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 00:09:41.230411 coreos-metadata[986]: Jul 07 00:09:41.219 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jul 7 00:09:41.170582 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:09:41.273220 coreos-metadata[1000]: Jul 07 00:09:41.219 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jul 7 00:09:41.195793 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:09:41.212427 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 00:09:41.303255 initrd-setup-root[1016]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 00:09:41.248352 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 00:09:41.323250 initrd-setup-root[1023]: cut: /sysroot/etc/group: No such file or directory
Jul 7 00:09:41.333256 initrd-setup-root[1030]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 00:09:41.343403 initrd-setup-root[1037]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 00:09:41.338768 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 00:09:41.367292 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 00:09:41.409339 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:09:41.390094 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 00:09:41.418996 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 00:09:41.435630 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 00:09:41.452286 ignition[1104]: INFO : Ignition 2.19.0
Jul 7 00:09:41.452286 ignition[1104]: INFO : Stage: mount
Jul 7 00:09:41.452286 ignition[1104]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:09:41.452286 ignition[1104]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 7 00:09:41.452286 ignition[1104]: INFO : mount: mount passed
Jul 7 00:09:41.452286 ignition[1104]: INFO : POST message to Packet Timeline
Jul 7 00:09:41.452286 ignition[1104]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jul 7 00:09:42.281857 coreos-metadata[1000]: Jul 07 00:09:42.281 INFO Fetch successful
Jul 7 00:09:42.364699 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Jul 7 00:09:42.364760 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Jul 7 00:09:42.402100 ignition[1104]: INFO : GET result: OK
Jul 7 00:09:42.707910 coreos-metadata[986]: Jul 07 00:09:42.707 INFO Fetch successful
Jul 7 00:09:42.773013 ignition[1104]: INFO : Ignition finished successfully
Jul 7 00:09:42.774007 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 00:09:42.792553 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 00:09:42.802223 coreos-metadata[986]: Jul 07 00:09:42.786 INFO wrote hostname ci-4081.3.4-a-fd0ee851f3 to /sysroot/etc/hostname
Jul 7 00:09:42.809442 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 00:09:42.862443 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:09:42.902231 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1130)
Jul 7 00:09:42.902250 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:09:42.922980 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:09:42.941382 kernel: BTRFS info (device sda6): using free space tree
Jul 7 00:09:42.980391 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 7 00:09:42.980408 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 7 00:09:42.994355 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:09:43.021696 ignition[1147]: INFO : Ignition 2.19.0
Jul 7 00:09:43.021696 ignition[1147]: INFO : Stage: files
Jul 7 00:09:43.037368 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:09:43.037368 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 7 00:09:43.037368 ignition[1147]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 00:09:43.037368 ignition[1147]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 00:09:43.037368 ignition[1147]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 00:09:43.037368 ignition[1147]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 00:09:43.037368 ignition[1147]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 00:09:43.037368 ignition[1147]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 00:09:43.037368 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 7 00:09:43.037368 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jul 7 00:09:43.025139 unknown[1147]: wrote ssh authorized keys file for user: core
Jul 7 00:09:43.175373 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 00:09:43.246773 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 7 00:09:43.246773 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:09:43.278289 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 7 00:09:44.037755 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 7 00:09:44.355735 ignition[1147]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 00:09:44.355735 ignition[1147]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 7 00:09:44.385384 ignition[1147]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 00:09:44.385384 ignition[1147]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 00:09:44.385384 ignition[1147]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 7 00:09:44.385384 ignition[1147]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 00:09:44.385384 ignition[1147]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 00:09:44.385384 ignition[1147]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 00:09:44.385384 ignition[1147]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 00:09:44.385384 ignition[1147]: INFO : files: files passed
Jul 7 00:09:44.385384 ignition[1147]: INFO : POST message to Packet Timeline
Jul 7 00:09:44.385384 ignition[1147]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jul 7 00:09:46.187166 ignition[1147]: INFO : GET result: OK
Jul 7 00:09:46.656253 ignition[1147]: INFO : Ignition finished successfully
Jul 7 00:09:46.659426 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 00:09:46.693435 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 00:09:46.693864 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 00:09:46.723644 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 00:09:46.723735 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 00:09:46.769004 initrd-setup-root-after-ignition[1187]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:09:46.769004 initrd-setup-root-after-ignition[1187]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:09:46.807376 initrd-setup-root-after-ignition[1191]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:09:46.773561 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 00:09:46.784449 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 00:09:46.830359 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 00:09:46.891469 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 00:09:46.891521 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 00:09:46.910608 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 00:09:46.921461 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 00:09:46.948413 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 00:09:46.962583 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 00:09:47.034426 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 00:09:47.057565 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 00:09:47.115716 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:09:47.127755 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:09:47.148817 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 00:09:47.166731 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 00:09:47.167157 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 00:09:47.195962 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 00:09:47.216751 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 00:09:47.234726 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 00:09:47.253862 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:09:47.274757 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 00:09:47.295754 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 00:09:47.315743 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:09:47.336778 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 00:09:47.357872 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 00:09:47.377744 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 00:09:47.395632 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 00:09:47.396029 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:09:47.422849 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:09:47.443772 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:09:47.464628 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 00:09:47.465048 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:09:47.486634 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 00:09:47.487033 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:09:47.517730 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 00:09:47.518204 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 00:09:47.537956 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 00:09:47.555615 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 00:09:47.556045 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:09:47.576748 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 00:09:47.594709 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 00:09:47.612834 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 00:09:47.613165 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 00:09:47.632735 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 00:09:47.633036 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 00:09:47.656937 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 00:09:47.657374 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 00:09:47.676826 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 00:09:47.788289 ignition[1211]: INFO : Ignition 2.19.0
Jul 7 00:09:47.788289 ignition[1211]: INFO : Stage: umount
Jul 7 00:09:47.788289 ignition[1211]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:09:47.788289 ignition[1211]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 7 00:09:47.788289 ignition[1211]: INFO : umount: umount passed
Jul 7 00:09:47.788289 ignition[1211]: INFO : POST message to Packet Timeline
Jul 7 00:09:47.788289 ignition[1211]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jul 7 00:09:47.677228 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 00:09:47.694831 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 7 00:09:47.695237 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 00:09:47.724385 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 00:09:47.748340 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 00:09:47.748480 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:09:47.786401 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 00:09:47.796286 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 00:09:47.796559 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:09:47.820416 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 00:09:47.820495 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 00:09:47.872216 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 00:09:47.872577 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 00:09:47.872630 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 00:09:47.888220 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 00:09:47.888274 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 00:09:48.753352 ignition[1211]: INFO : GET result: OK
Jul 7 00:09:49.149418 ignition[1211]: INFO : Ignition finished successfully
Jul 7 00:09:49.152396 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 00:09:49.152698 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 00:09:49.171552 systemd[1]: Stopped target network.target - Network.
Jul 7 00:09:49.186393 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 00:09:49.186658 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 00:09:49.204553 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 00:09:49.204704 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 00:09:49.223655 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 00:09:49.223819 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 00:09:49.231887 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 00:09:49.232052 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 00:09:49.259622 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 00:09:49.259795 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 00:09:49.268324 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 00:09:49.277211 systemd-networkd[927]: enp1s0f0np0: DHCPv6 lease lost
Jul 7 00:09:49.286371 systemd-networkd[927]: enp1s0f1np1: DHCPv6 lease lost
Jul 7 00:09:49.295718 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 00:09:49.315372 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 00:09:49.315742 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 00:09:49.335690 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 00:09:49.336058 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 00:09:49.356274 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 00:09:49.356388 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:09:49.392366 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 00:09:49.412285 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 00:09:49.412328 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:09:49.432431 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 00:09:49.432523 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:09:49.450525 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 00:09:49.450691 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:09:49.470520 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 00:09:49.470690 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:09:49.489763 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:09:49.512470 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 00:09:49.512848 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:09:49.545243 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 00:09:49.545392 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:09:49.549660 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 00:09:49.549765 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:09:49.577431 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 00:09:49.577593 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 00:09:49.607725 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 00:09:49.607894 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 00:09:49.637518 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 00:09:49.637660 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:09:49.688247 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 00:09:49.707207 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 00:09:49.707250 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:09:49.730312 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:09:49.958362 systemd-journald[267]: Received SIGTERM from PID 1 (systemd).
Jul 7 00:09:49.730393 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:09:49.750345 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 00:09:49.750584 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 00:09:49.820394 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 00:09:49.820675 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 00:09:49.843371 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 00:09:49.879660 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 00:09:49.899186 systemd[1]: Switching root.
Jul 7 00:09:50.031391 systemd-journald[267]: Journal stopped
Jul 7 00:09:52.651765 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 00:09:52.651779 kernel: SELinux: policy capability open_perms=1
Jul 7 00:09:52.651786 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 00:09:52.651793 kernel: SELinux: policy capability always_check_network=0
Jul 7 00:09:52.651798 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 00:09:52.651803 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 00:09:52.651809 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 00:09:52.651814 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 00:09:52.651820 kernel: audit: type=1403 audit(1751846990.274:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 00:09:52.651826 systemd[1]: Successfully loaded SELinux policy in 159.423ms.
Jul 7 00:09:52.651835 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.542ms.
Jul 7 00:09:52.651841 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 00:09:52.651847 systemd[1]: Detected architecture x86-64.
Jul 7 00:09:52.651853 systemd[1]: Detected first boot.
Jul 7 00:09:52.651860 systemd[1]: Hostname set to .
Jul 7 00:09:52.651868 systemd[1]: Initializing machine ID from random generator.
Jul 7 00:09:52.651874 zram_generator::config[1263]: No configuration found.
Jul 7 00:09:52.651881 systemd[1]: Populated /etc with preset unit settings.
Jul 7 00:09:52.651887 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 00:09:52.651893 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 00:09:52.651899 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 00:09:52.651906 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 00:09:52.651913 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 00:09:52.651919 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 00:09:52.651926 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 00:09:52.651932 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 00:09:52.651939 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 00:09:52.651945 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 00:09:52.651951 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 00:09:52.651959 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:09:52.651965 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:09:52.651972 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 00:09:52.651980 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 00:09:52.651987 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 00:09:52.651993 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 00:09:52.652000 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
Jul 7 00:09:52.652006 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:09:52.652013 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 00:09:52.652020 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 00:09:52.652026 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 00:09:52.652034 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 00:09:52.652041 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:09:52.652048 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 00:09:52.652055 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 00:09:52.652062 systemd[1]: Reached target swap.target - Swaps.
Jul 7 00:09:52.652069 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 00:09:52.652075 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 00:09:52.652082 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:09:52.652088 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:09:52.652095 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:09:52.652103 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 00:09:52.652109 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 00:09:52.652116 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 00:09:52.652123 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 00:09:52.652133 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:09:52.652140 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 00:09:52.652146 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 00:09:52.652154 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 00:09:52.652161 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 00:09:52.652168 systemd[1]: Reached target machines.target - Containers.
Jul 7 00:09:52.652175 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 00:09:52.652182 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:09:52.652189 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 00:09:52.652195 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 00:09:52.652202 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 00:09:52.652209 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 00:09:52.652217 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 00:09:52.652224 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 00:09:52.652230 kernel: ACPI: bus type drm_connector registered
Jul 7 00:09:52.652236 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 00:09:52.652243 kernel: fuse: init (API version 7.39)
Jul 7 00:09:52.652249 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 00:09:52.652256 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 00:09:52.652263 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 00:09:52.652271 kernel: loop: module loaded
Jul 7 00:09:52.652278 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 00:09:52.652284 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 00:09:52.652291 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 00:09:52.652305 systemd-journald[1367]: Collecting audit messages is disabled.
Jul 7 00:09:52.652321 systemd-journald[1367]: Journal started
Jul 7 00:09:52.652335 systemd-journald[1367]: Runtime Journal (/run/log/journal/af2d0c4bcd7944ca91b6fdd77710ab19) is 8.0M, max 639.9M, 631.9M free.
Jul 7 00:09:50.793747 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 00:09:50.809348 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 7 00:09:50.809577 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 00:09:52.680172 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 00:09:52.714148 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 00:09:52.748135 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 00:09:52.789268 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 00:09:52.789288 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 00:09:52.802200 systemd[1]: Stopped verity-setup.service.
Jul 7 00:09:52.870196 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:09:52.891337 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 00:09:52.900750 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 00:09:52.910409 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 00:09:52.920398 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 00:09:52.930400 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 00:09:52.940338 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 00:09:52.950362 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 00:09:52.960477 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 00:09:52.971526 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:09:52.982618 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 00:09:52.982799 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 00:09:52.995255 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 00:09:52.995654 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 00:09:53.007061 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 00:09:53.007475 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 00:09:53.018274 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 00:09:53.018681 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 00:09:53.031080 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 00:09:53.031510 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 00:09:53.042071 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 00:09:53.042491 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 00:09:53.053101 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:09:53.065057 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 00:09:53.078036 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 00:09:53.090099 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:09:53.126179 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 00:09:53.157448 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 00:09:53.170292 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 00:09:53.180396 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 00:09:53.180490 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 00:09:53.193330 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 7 00:09:53.214548 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 00:09:53.237066 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 00:09:53.247419 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:09:53.249068 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 00:09:53.259507 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 00:09:53.271251 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 00:09:53.271860 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 00:09:53.275834 systemd-journald[1367]: Time spent on flushing to /var/log/journal/af2d0c4bcd7944ca91b6fdd77710ab19 is 13.973ms for 1366 entries.
Jul 7 00:09:53.275834 systemd-journald[1367]: System Journal (/var/log/journal/af2d0c4bcd7944ca91b6fdd77710ab19) is 8.0M, max 195.6M, 187.6M free.
Jul 7 00:09:53.322974 systemd-journald[1367]: Received client request to flush runtime journal.
Jul 7 00:09:53.290305 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 00:09:53.290976 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 00:09:53.298426 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 00:09:53.308093 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 00:09:53.320223 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 7 00:09:53.346134 kernel: loop0: detected capacity change from 0 to 142488
Jul 7 00:09:53.346688 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 00:09:53.374371 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 00:09:53.385131 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 00:09:53.395366 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 00:09:53.406373 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 00:09:53.417372 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 00:09:53.428351 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:09:53.445342 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 00:09:53.453173 kernel: loop1: detected capacity change from 0 to 140768
Jul 7 00:09:53.465096 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 00:09:53.485415 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 7 00:09:53.497909 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 00:09:53.513819 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 00:09:53.514258 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 7 00:09:53.527169 kernel: loop2: detected capacity change from 0 to 224512
Jul 7 00:09:53.537670 udevadm[1403]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 7 00:09:53.571177 systemd-tmpfiles[1416]: ACLs are not supported, ignoring.
Jul 7 00:09:53.571188 systemd-tmpfiles[1416]: ACLs are not supported, ignoring.
Jul 7 00:09:53.573518 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:09:53.607270 ldconfig[1393]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 00:09:53.608508 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 00:09:53.625164 kernel: loop3: detected capacity change from 0 to 8
Jul 7 00:09:53.673315 kernel: loop4: detected capacity change from 0 to 142488
Jul 7 00:09:53.700132 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 00:09:53.710188 kernel: loop5: detected capacity change from 0 to 140768
Jul 7 00:09:53.742131 kernel: loop6: detected capacity change from 0 to 224512
Jul 7 00:09:53.746285 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:09:53.758225 systemd-udevd[1425]: Using default interface naming scheme 'v255'.
Jul 7 00:09:53.771453 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:09:53.773133 kernel: loop7: detected capacity change from 0 to 8
Jul 7 00:09:53.773555 (sd-merge)[1423]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
Jul 7 00:09:53.773919 (sd-merge)[1423]: Merged extensions into '/usr'.
Jul 7 00:09:53.792381 systemd[1]: Reloading requested from client PID 1398 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 00:09:53.792389 systemd[1]: Reloading...
Jul 7 00:09:53.801164 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1481)
Jul 7 00:09:53.801487 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Jul 7 00:09:53.834137 zram_generator::config[1533]: No configuration found.
Jul 7 00:09:53.834201 kernel: ACPI: button: Sleep Button [SLPB]
Jul 7 00:09:53.869139 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 7 00:09:53.894143 kernel: IPMI message handler: version 39.2
Jul 7 00:09:53.894189 kernel: ACPI: button: Power Button [PWRF]
Jul 7 00:09:53.894201 kernel: mousedev: PS/2 mouse device common for all mice
Jul 7 00:09:53.928436 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 00:09:53.964511 kernel: ipmi device interface
Jul 7 00:09:54.011564 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Jul 7 00:09:54.011738 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Jul 7 00:09:54.015015 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped.
Jul 7 00:09:54.015142 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Jul 7 00:09:54.024133 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Jul 7 00:09:54.024245 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Jul 7 00:09:54.024331 kernel: ipmi_si: IPMI System Interface driver
Jul 7 00:09:54.024342 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Jul 7 00:09:54.024420 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Jul 7 00:09:54.024431 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Jul 7 00:09:54.024441 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Jul 7 00:09:54.024522 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Jul 7 00:09:54.024590 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Jul 7 00:09:54.024655 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Jul 7 00:09:54.024664 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Jul 7 00:09:54.024674 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Jul 7 00:09:54.047455 systemd[1]: Reloading finished in 254 ms.
Jul 7 00:09:54.049131 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Jul 7 00:09:54.283134 kernel: iTCO_vendor_support: vendor-support=0
Jul 7 00:09:54.319132 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20)
Jul 7 00:09:54.348118 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400)
Jul 7 00:09:54.348252 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Jul 7 00:09:54.372479 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 00:09:54.383542 kernel: intel_rapl_common: Found RAPL domain package
Jul 7 00:09:54.383567 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Jul 7 00:09:54.383658 kernel: intel_rapl_common: Found RAPL domain core
Jul 7 00:09:54.410130 kernel: intel_rapl_common: Found RAPL domain dram
Jul 7 00:09:54.442132 kernel: ipmi_ssif: IPMI SSIF Interface driver
Jul 7 00:09:54.447098 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 7 00:09:54.482306 systemd[1]: Starting ensure-sysext.service...
Jul 7 00:09:54.489790 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 7 00:09:54.509327 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 00:09:54.516295 lvm[1603]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 7 00:09:54.521119 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 00:09:54.521711 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 00:09:54.522283 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:09:54.552302 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 00:09:54.560277 systemd-tmpfiles[1607]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 00:09:54.560498 systemd-tmpfiles[1607]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 00:09:54.561014 systemd-tmpfiles[1607]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 00:09:54.561196 systemd-tmpfiles[1607]: ACLs are not supported, ignoring.
Jul 7 00:09:54.561234 systemd-tmpfiles[1607]: ACLs are not supported, ignoring.
Jul 7 00:09:54.563158 systemd-tmpfiles[1607]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 00:09:54.563163 systemd-tmpfiles[1607]: Skipping /boot
Jul 7 00:09:54.563473 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 7 00:09:54.564527 systemd[1]: Reloading requested from client PID 1602 ('systemctl') (unit ensure-sysext.service)...
Jul 7 00:09:54.564535 systemd[1]: Reloading...
Jul 7 00:09:54.567529 systemd-tmpfiles[1607]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 00:09:54.567533 systemd-tmpfiles[1607]: Skipping /boot
Jul 7 00:09:54.604131 zram_generator::config[1640]: No configuration found.
Jul 7 00:09:54.658682 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 00:09:54.712870 systemd[1]: Reloading finished in 148 ms.
Jul 7 00:09:54.737363 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:09:54.748388 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:09:54.762223 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:09:54.785337 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 7 00:09:54.796022 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 00:09:54.801933 augenrules[1716]: No rules
Jul 7 00:09:54.815623 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 7 00:09:54.826885 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 00:09:54.828935 lvm[1721]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 7 00:09:54.840183 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 00:09:54.859260 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 00:09:54.871072 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 00:09:54.880768 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 7 00:09:54.890385 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 00:09:54.901385 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 00:09:54.912428 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 7 00:09:54.923477 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 00:09:54.934829 systemd-networkd[1606]: lo: Link UP
Jul 7 00:09:54.934832 systemd-networkd[1606]: lo: Gained carrier
Jul 7 00:09:54.937898 systemd-networkd[1606]: bond0: netdev ready
Jul 7 00:09:54.938843 systemd-networkd[1606]: Enumeration completed
Jul 7 00:09:54.955426 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 00:09:54.961444 systemd-networkd[1606]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:42:7b:88.network.
Jul 7 00:09:54.966377 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 00:09:54.977161 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:09:54.977285 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:09:54.978026 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 00:09:54.982678 systemd-resolved[1723]: Positive Trust Anchors:
Jul 7 00:09:54.982686 systemd-resolved[1723]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 00:09:54.982711 systemd-resolved[1723]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 00:09:54.985682 systemd-resolved[1723]: Using system hostname 'ci-4081.3.4-a-fd0ee851f3'.
Jul 7 00:09:54.987839 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 00:09:54.999817 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 00:09:55.009258 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:09:55.009956 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 00:09:55.021889 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 00:09:55.031222 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 00:09:55.031301 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:09:55.032200 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 00:09:55.032288 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 00:09:55.043473 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 00:09:55.043546 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 00:09:55.054421 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 00:09:55.054492 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 00:09:55.064413 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 00:09:55.076627 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:09:55.076767 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:09:55.086369 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 00:09:55.096807 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 00:09:55.106784 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 00:09:55.117800 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 00:09:55.127268 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:09:55.127369 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 00:09:55.127441 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:09:55.128209 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 00:09:55.128283 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 00:09:55.139490 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 00:09:55.139560 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 00:09:55.149454 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 00:09:55.149522 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 00:09:55.160444 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 00:09:55.160511 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 00:09:55.171105 systemd[1]: Finished ensure-sysext.service.
Jul 7 00:09:55.180645 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 00:09:55.180680 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 00:09:55.190292 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 7 00:09:55.233645 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 7 00:09:55.244201 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 00:09:55.616154 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Jul 7 00:09:55.639154 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link
Jul 7 00:09:55.639945 systemd-networkd[1606]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:42:7b:89.network.
Jul 7 00:09:55.853183 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Jul 7 00:09:55.875992 systemd-networkd[1606]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Jul 7 00:09:55.876171 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link
Jul 7 00:09:55.877356 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 00:09:55.877500 systemd-networkd[1606]: enp1s0f0np0: Link UP
Jul 7 00:09:55.877794 systemd-networkd[1606]: enp1s0f0np0: Gained carrier
Jul 7 00:09:55.897183 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Jul 7 00:09:55.903496 systemd-networkd[1606]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:42:7b:88.network.
Jul 7 00:09:55.903772 systemd-networkd[1606]: enp1s0f1np1: Link UP
Jul 7 00:09:55.904065 systemd-networkd[1606]: enp1s0f1np1: Gained carrier
Jul 7 00:09:55.906308 systemd[1]: Reached target network.target - Network.
Jul 7 00:09:55.915204 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:09:55.921429 systemd-networkd[1606]: bond0: Link UP
Jul 7 00:09:55.921791 systemd-networkd[1606]: bond0: Gained carrier
Jul 7 00:09:55.922147 systemd-timesyncd[1762]: Network configuration changed, trying to establish connection.
Jul 7 00:09:55.927232 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 00:09:55.937520 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 7 00:09:55.948363 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 7 00:09:55.959760 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 7 00:09:55.969711 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 7 00:09:55.987924 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 7 00:09:56.007185 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex
Jul 7 00:09:56.007205 kernel: bond0: active interface up!
Jul 7 00:09:56.018292 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 7 00:09:56.018364 systemd[1]: Reached target paths.target - Path Units.
Jul 7 00:09:56.026161 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 00:09:56.034679 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 7 00:09:56.044877 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 7 00:09:56.054809 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 7 00:09:56.064578 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 7 00:09:56.073519 systemd-timesyncd[1762]: Contacted time server 23.186.168.128:123 (0.flatcar.pool.ntp.org).
Jul 7 00:09:56.073541 systemd-timesyncd[1762]: Initial clock synchronization to Mon 2025-07-07 00:09:55.947270 UTC.
Jul 7 00:09:56.074289 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 00:09:56.084426 systemd[1]: Reached target basic.target - Basic System.
Jul 7 00:09:56.092225 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 7 00:09:56.092239 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 7 00:09:56.101198 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 7 00:09:56.111871 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 7 00:09:56.121739 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 7 00:09:56.134523 coreos-metadata[1768]: Jul 07 00:09:56.134 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jul 7 00:09:56.144137 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex
Jul 7 00:09:56.144107 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 7 00:09:56.153804 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 7 00:09:56.155003 dbus-daemon[1769]: [system] SELinux support is enabled
Jul 7 00:09:56.155675 jq[1773]: false
Jul 7 00:09:56.163358 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 7 00:09:56.175349 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 7 00:09:56.185143 extend-filesystems[1774]: Found loop4
Jul 7 00:09:56.192267 extend-filesystems[1774]: Found loop5
Jul 7 00:09:56.192267 extend-filesystems[1774]: Found loop6
Jul 7 00:09:56.192267 extend-filesystems[1774]: Found loop7
Jul 7 00:09:56.192267 extend-filesystems[1774]: Found sda
Jul 7 00:09:56.192267 extend-filesystems[1774]: Found sda1
Jul 7 00:09:56.192267 extend-filesystems[1774]: Found sda2
Jul 7 00:09:56.192267 extend-filesystems[1774]: Found sda3
Jul 7 00:09:56.192267 extend-filesystems[1774]: Found usr
Jul 7 00:09:56.192267 extend-filesystems[1774]: Found sda4
Jul 7 00:09:56.192267 extend-filesystems[1774]: Found sda6
Jul 7 00:09:56.192267 extend-filesystems[1774]: Found sda7
Jul 7 00:09:56.192267 extend-filesystems[1774]: Found sda9
Jul 7 00:09:56.192267 extend-filesystems[1774]: Checking size of /dev/sda9
Jul 7 00:09:56.192267 extend-filesystems[1774]: Resized partition /dev/sda9
Jul 7 00:09:56.363222 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks
Jul 7 00:09:56.363318 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1490)
Jul 7 00:09:56.186068 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 7 00:09:56.363380 extend-filesystems[1788]: resize2fs 1.47.1 (20-May-2024)
Jul 7 00:09:56.193016 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 7 00:09:56.234861 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 7 00:09:56.245847 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 7 00:09:56.294256 systemd[1]: Starting tcsd.service - TCG Core Services Daemon...
Jul 7 00:09:56.303542 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 7 00:09:56.380637 update_engine[1799]: I20250707 00:09:56.347572 1799 main.cc:92] Flatcar Update Engine starting
Jul 7 00:09:56.380637 update_engine[1799]: I20250707 00:09:56.348232 1799 update_check_scheduler.cc:74] Next update check in 6m42s
Jul 7 00:09:56.303939 systemd[1]: Starting update-engine.service - Update Engine...
Jul 7 00:09:56.380799 jq[1800]: true
Jul 7 00:09:56.321559 systemd-logind[1794]: Watching system buttons on /dev/input/event3 (Power Button)
Jul 7 00:09:56.321568 systemd-logind[1794]: Watching system buttons on /dev/input/event2 (Sleep Button)
Jul 7 00:09:56.321578 systemd-logind[1794]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Jul 7 00:09:56.321881 systemd-logind[1794]: New seat seat0.
Jul 7 00:09:56.340686 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 7 00:09:56.355465 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 7 00:09:56.373420 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 7 00:09:56.399373 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 7 00:09:56.399469 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 7 00:09:56.399637 systemd[1]: motdgen.service: Deactivated successfully.
Jul 7 00:09:56.399720 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 7 00:09:56.409586 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 7 00:09:56.409676 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 7 00:09:56.423057 (ntainerd)[1804]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 7 00:09:56.424472 jq[1803]: true
Jul 7 00:09:56.426332 dbus-daemon[1769]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 7 00:09:56.429032 tar[1802]: linux-amd64/LICENSE
Jul 7 00:09:56.429272 tar[1802]: linux-amd64/helm
Jul 7 00:09:56.435545 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Jul 7 00:09:56.435641 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped.
Jul 7 00:09:56.435732 systemd[1]: Started update-engine.service - Update Engine.
Jul 7 00:09:56.436955 sshd_keygen[1797]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 7 00:09:56.451363 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 7 00:09:56.451484 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 7 00:09:56.462232 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 7 00:09:56.462321 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 7 00:09:56.480323 bash[1832]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 00:09:56.486331 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 7 00:09:56.498061 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 7 00:09:56.507454 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 7 00:09:56.523093 locksmithd[1841]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 7 00:09:56.532287 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 7 00:09:56.541049 systemd[1]: Starting sshkeys.service...
Jul 7 00:09:56.548491 systemd[1]: issuegen.service: Deactivated successfully.
Jul 7 00:09:56.548593 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 7 00:09:56.560674 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 7 00:09:56.571911 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 7 00:09:56.583846 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 7 00:09:56.594510 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 7 00:09:56.604926 coreos-metadata[1862]: Jul 07 00:09:56.604 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jul 7 00:09:56.606721 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 7 00:09:56.615950 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1.
Jul 7 00:09:56.625436 systemd[1]: Reached target getty.target - Login Prompts.
Jul 7 00:09:56.625686 containerd[1804]: time="2025-07-07T00:09:56.625649864Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 7 00:09:56.638163 containerd[1804]: time="2025-07-07T00:09:56.638016184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 7 00:09:56.638914 containerd[1804]: time="2025-07-07T00:09:56.638895432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 7 00:09:56.638958 containerd[1804]: time="2025-07-07T00:09:56.638914324Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 7 00:09:56.638958 containerd[1804]: time="2025-07-07T00:09:56.638929132Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 7 00:09:56.639052 containerd[1804]: time="2025-07-07T00:09:56.639041928Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 7 00:09:56.639083 containerd[1804]: time="2025-07-07T00:09:56.639055367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 7 00:09:56.639120 containerd[1804]: time="2025-07-07T00:09:56.639104497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 7 00:09:56.639169 containerd[1804]: time="2025-07-07T00:09:56.639117424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 7 00:09:56.639249 containerd[1804]: time="2025-07-07T00:09:56.639237958Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 7 00:09:56.639288 containerd[1804]: time="2025-07-07T00:09:56.639249272Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 7 00:09:56.639288 containerd[1804]: time="2025-07-07T00:09:56.639262517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 7 00:09:56.639288 containerd[1804]: time="2025-07-07T00:09:56.639273517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 7 00:09:56.639375 containerd[1804]: time="2025-07-07T00:09:56.639336064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 7 00:09:56.639638 containerd[1804]: time="2025-07-07T00:09:56.639628397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 7 00:09:56.639718 containerd[1804]: time="2025-07-07T00:09:56.639703044Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 7 00:09:56.639749 containerd[1804]: time="2025-07-07T00:09:56.639716068Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 7 00:09:56.639791 containerd[1804]: time="2025-07-07T00:09:56.639780398Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 7 00:09:56.639836 containerd[1804]: time="2025-07-07T00:09:56.639819903Z" level=info msg="metadata content store policy set" policy=shared
Jul 7 00:09:56.651408 containerd[1804]: time="2025-07-07T00:09:56.651365652Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 7 00:09:56.651408 containerd[1804]: time="2025-07-07T00:09:56.651391931Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 7 00:09:56.651481 containerd[1804]: time="2025-07-07T00:09:56.651408890Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 7 00:09:56.651481 containerd[1804]: time="2025-07-07T00:09:56.651422833Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 7 00:09:56.651481 containerd[1804]: time="2025-07-07T00:09:56.651436774Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 7 00:09:56.651575 containerd[1804]: time="2025-07-07T00:09:56.651524743Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 7 00:09:56.651702 containerd[1804]: time="2025-07-07T00:09:56.651689049Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 7 00:09:56.651768 containerd[1804]: time="2025-07-07T00:09:56.651759599Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 7 00:09:56.651785 containerd[1804]: time="2025-07-07T00:09:56.651770755Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 7 00:09:56.651785 containerd[1804]: time="2025-07-07T00:09:56.651778482Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 7 00:09:56.651817 containerd[1804]: time="2025-07-07T00:09:56.651786066Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 7 00:09:56.651817 containerd[1804]: time="2025-07-07T00:09:56.651793981Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 7 00:09:56.651817 containerd[1804]: time="2025-07-07T00:09:56.651801033Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 7 00:09:56.651817 containerd[1804]: time="2025-07-07T00:09:56.651809354Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 7 00:09:56.651878 containerd[1804]: time="2025-07-07T00:09:56.651818078Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 7 00:09:56.651878 containerd[1804]: time="2025-07-07T00:09:56.651829049Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 7 00:09:56.651878 containerd[1804]: time="2025-07-07T00:09:56.651836390Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 7 00:09:56.651878 containerd[1804]: time="2025-07-07T00:09:56.651843071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 7 00:09:56.651878 containerd[1804]: time="2025-07-07T00:09:56.651854378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 7 00:09:56.651878 containerd[1804]: time="2025-07-07T00:09:56.651862167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 7 00:09:56.651878 containerd[1804]: time="2025-07-07T00:09:56.651869299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 7 00:09:56.651878 containerd[1804]: time="2025-07-07T00:09:56.651876667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 7 00:09:56.652003 containerd[1804]: time="2025-07-07T00:09:56.651884139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 7 00:09:56.652003 containerd[1804]: time="2025-07-07T00:09:56.651891832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 7 00:09:56.652003 containerd[1804]: time="2025-07-07T00:09:56.651898553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 7 00:09:56.652003 containerd[1804]: time="2025-07-07T00:09:56.651905463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 7 00:09:56.652003 containerd[1804]: time="2025-07-07T00:09:56.651913308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 7 00:09:56.652003 containerd[1804]: time="2025-07-07T00:09:56.651921573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 7 00:09:56.652003 containerd[1804]: time="2025-07-07T00:09:56.651927943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 7 00:09:56.652003 containerd[1804]: time="2025-07-07T00:09:56.651934578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 7 00:09:56.652003 containerd[1804]: time="2025-07-07T00:09:56.651941723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 7 00:09:56.652003 containerd[1804]: time="2025-07-07T00:09:56.651952133Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 7 00:09:56.652003 containerd[1804]: time="2025-07-07T00:09:56.651964309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 7 00:09:56.652003 containerd[1804]: time="2025-07-07T00:09:56.651971274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 7 00:09:56.652003 containerd[1804]: time="2025-07-07T00:09:56.651977353Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 7 00:09:56.652003 containerd[1804]: time="2025-07-07T00:09:56.652001942Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 7 00:09:56.652239 containerd[1804]: time="2025-07-07T00:09:56.652011281Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 7 00:09:56.652239 containerd[1804]: time="2025-07-07T00:09:56.652017869Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 7 00:09:56.652239 containerd[1804]: time="2025-07-07T00:09:56.652024504Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 7 00:09:56.652239 containerd[1804]: time="2025-07-07T00:09:56.652029822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 7 00:09:56.652239 containerd[1804]: time="2025-07-07T00:09:56.652037483Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 7 00:09:56.652239 containerd[1804]: time="2025-07-07T00:09:56.652043403Z" level=info msg="NRI interface is disabled by configuration."
Jul 7 00:09:56.652239 containerd[1804]: time="2025-07-07T00:09:56.652051254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 7 00:09:56.652347 containerd[1804]: time="2025-07-07T00:09:56.652219199Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 7 00:09:56.652347 containerd[1804]: time="2025-07-07T00:09:56.652256177Z" level=info msg="Connect containerd service"
Jul 7 00:09:56.652347 containerd[1804]: time="2025-07-07T00:09:56.652287297Z" level=info msg="using legacy CRI server"
Jul 7 00:09:56.652347 containerd[1804]: time="2025-07-07T00:09:56.652292288Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 7 00:09:56.652347 containerd[1804]: time="2025-07-07T00:09:56.652337132Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 7 00:09:56.652658 containerd[1804]: time="2025-07-07T00:09:56.652647840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 7 00:09:56.652747 containerd[1804]: time="2025-07-07T00:09:56.652731291Z" level=info msg="Start subscribing containerd event"
Jul 7 00:09:56.652782 containerd[1804]: time="2025-07-07T00:09:56.652758875Z" level=info msg="Start recovering state"
Jul 7 00:09:56.652808 containerd[1804]: time="2025-07-07T00:09:56.652802950Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 7 00:09:56.652839 containerd[1804]: time="2025-07-07T00:09:56.652805004Z" level=info msg="Start event monitor"
Jul 7 00:09:56.652839 containerd[1804]: time="2025-07-07T00:09:56.652823612Z" level=info msg="Start snapshots syncer"
Jul 7 00:09:56.652839 containerd[1804]: time="2025-07-07T00:09:56.652828631Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 7 00:09:56.652906 containerd[1804]: time="2025-07-07T00:09:56.652833114Z" level=info msg="Start cni network conf syncer for default"
Jul 7 00:09:56.652906 containerd[1804]: time="2025-07-07T00:09:56.652846642Z" level=info msg="Start streaming server"
Jul 7 00:09:56.652906 containerd[1804]: time="2025-07-07T00:09:56.652875889Z" level=info msg="containerd successfully booted in 0.028141s"
Jul 7 00:09:56.652919 systemd[1]: Started containerd.service - containerd container runtime.
Jul 7 00:09:56.736746 tar[1802]: linux-amd64/README.md
Jul 7 00:09:56.746130 kernel: EXT4-fs (sda9): resized filesystem to 116605649
Jul 7 00:09:56.770288 extend-filesystems[1788]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jul 7 00:09:56.770288 extend-filesystems[1788]: old_desc_blocks = 1, new_desc_blocks = 56
Jul 7 00:09:56.770288 extend-filesystems[1788]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long.
Jul 7 00:09:56.810204 extend-filesystems[1774]: Resized filesystem in /dev/sda9
Jul 7 00:09:56.810204 extend-filesystems[1774]: Found sdb
Jul 7 00:09:56.770717 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 7 00:09:56.770814 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 7 00:09:56.818405 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 7 00:09:57.290315 systemd-networkd[1606]: bond0: Gained IPv6LL
Jul 7 00:09:57.291465 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 7 00:09:57.302865 systemd[1]: Reached target network-online.target - Network is Online.
Jul 7 00:09:57.322391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:09:57.332974 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 7 00:09:57.351872 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 7 00:09:58.044809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:09:58.064287 (kubelet)[1904]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:09:58.467421 kubelet[1904]: E0707 00:09:58.467294 1904 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:09:58.468380 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:09:58.468456 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:09:59.404961 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2
Jul 7 00:09:59.405099 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity
Jul 7 00:09:59.442549 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 7 00:09:59.458450 systemd[1]: Started sshd@0-147.28.180.255:22-139.178.89.65:37310.service - OpenSSH per-connection server daemon (139.178.89.65:37310).
Jul 7 00:09:59.500412 sshd[1922]: Accepted publickey for core from 139.178.89.65 port 37310 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8
Jul 7 00:09:59.501363 sshd[1922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:09:59.506688 systemd-logind[1794]: New session 1 of user core.
Jul 7 00:09:59.507490 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 7 00:09:59.533372 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 7 00:09:59.546062 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 7 00:09:59.577078 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 7 00:09:59.598046 (systemd)[1929]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 7 00:09:59.718930 systemd[1929]: Queued start job for default target default.target.
Jul 7 00:09:59.726660 systemd[1929]: Created slice app.slice - User Application Slice.
Jul 7 00:09:59.726674 systemd[1929]: Reached target paths.target - Paths.
Jul 7 00:09:59.726682 systemd[1929]: Reached target timers.target - Timers.
Jul 7 00:09:59.727319 systemd[1929]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 7 00:09:59.732715 systemd[1929]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 7 00:09:59.732743 systemd[1929]: Reached target sockets.target - Sockets.
Jul 7 00:09:59.732752 systemd[1929]: Reached target basic.target - Basic System.
Jul 7 00:09:59.732772 systemd[1929]: Reached target default.target - Main User Target.
Jul 7 00:09:59.732788 systemd[1929]: Startup finished in 118ms.
Jul 7 00:09:59.732864 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 7 00:09:59.744047 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 7 00:09:59.804203 systemd[1]: Started sshd@1-147.28.180.255:22-139.178.89.65:58122.service - OpenSSH per-connection server daemon (139.178.89.65:58122).
Jul 7 00:09:59.841514 sshd[1941]: Accepted publickey for core from 139.178.89.65 port 58122 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8
Jul 7 00:09:59.842142 sshd[1941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:09:59.844587 systemd-logind[1794]: New session 2 of user core.
Jul 7 00:09:59.860304 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 7 00:09:59.916504 sshd[1941]: pam_unix(sshd:session): session closed for user core
Jul 7 00:09:59.933546 systemd[1]: sshd@1-147.28.180.255:22-139.178.89.65:58122.service: Deactivated successfully.
Jul 7 00:09:59.934273 systemd[1]: session-2.scope: Deactivated successfully.
Jul 7 00:09:59.934895 systemd-logind[1794]: Session 2 logged out. Waiting for processes to exit.
Jul 7 00:09:59.935557 systemd[1]: Started sshd@2-147.28.180.255:22-139.178.89.65:58130.service - OpenSSH per-connection server daemon (139.178.89.65:58130).
Jul 7 00:09:59.946888 systemd-logind[1794]: Removed session 2.
Jul 7 00:09:59.971328 sshd[1948]: Accepted publickey for core from 139.178.89.65 port 58130 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8
Jul 7 00:09:59.971932 sshd[1948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:09:59.974112 systemd-logind[1794]: New session 3 of user core.
Jul 7 00:09:59.986317 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 7 00:10:00.046677 sshd[1948]: pam_unix(sshd:session): session closed for user core
Jul 7 00:10:00.049048 systemd[1]: sshd@2-147.28.180.255:22-139.178.89.65:58130.service: Deactivated successfully.
Jul 7 00:10:00.050926 systemd[1]: session-3.scope: Deactivated successfully.
Jul 7 00:10:00.052468 systemd-logind[1794]: Session 3 logged out. Waiting for processes to exit.
Jul 7 00:10:00.053714 systemd-logind[1794]: Removed session 3.
Jul 7 00:10:01.713063 login[1867]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 7 00:10:01.718647 login[1868]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 7 00:10:01.724974 systemd-logind[1794]: New session 4 of user core.
Jul 7 00:10:01.741705 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 7 00:10:01.748411 systemd-logind[1794]: New session 5 of user core.
Jul 7 00:10:01.772835 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 7 00:10:02.422829 coreos-metadata[1862]: Jul 07 00:10:02.422 INFO Fetch successful
Jul 7 00:10:02.463083 coreos-metadata[1768]: Jul 07 00:10:02.463 INFO Fetch successful
Jul 7 00:10:02.505540 unknown[1862]: wrote ssh authorized keys file for user: core
Jul 7 00:10:02.539407 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 7 00:10:02.540680 systemd[1]: Starting packet-phone-home.service - Report Success to Packet...
Jul 7 00:10:02.541109 update-ssh-keys[1981]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 00:10:02.541503 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 7 00:10:02.542673 systemd[1]: Finished sshkeys.service.
Jul 7 00:10:02.900180 systemd[1]: Finished packet-phone-home.service - Report Success to Packet.
Jul 7 00:10:02.901523 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 7 00:10:02.901943 systemd[1]: Startup finished in 1.746s (kernel) + 25.267s (initrd) + 12.786s (userspace) = 39.799s.
Jul 7 00:10:08.505584 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 7 00:10:08.525372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:10:08.780213 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:10:08.782419 (kubelet)[2000]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:10:08.806046 kubelet[2000]: E0707 00:10:08.806020 2000 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:10:08.808194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:10:08.808284 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:10:10.017435 systemd[1]: Started sshd@3-147.28.180.255:22-139.178.89.65:56630.service - OpenSSH per-connection server daemon (139.178.89.65:56630).
Jul 7 00:10:10.043666 sshd[2016]: Accepted publickey for core from 139.178.89.65 port 56630 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8
Jul 7 00:10:10.044355 sshd[2016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:10:10.046849 systemd-logind[1794]: New session 6 of user core.
Jul 7 00:10:10.047424 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 7 00:10:10.097442 sshd[2016]: pam_unix(sshd:session): session closed for user core
Jul 7 00:10:10.109747 systemd[1]: sshd@3-147.28.180.255:22-139.178.89.65:56630.service: Deactivated successfully.
Jul 7 00:10:10.110530 systemd[1]: session-6.scope: Deactivated successfully.
Jul 7 00:10:10.111236 systemd-logind[1794]: Session 6 logged out. Waiting for processes to exit.
Jul 7 00:10:10.111869 systemd[1]: Started sshd@4-147.28.180.255:22-139.178.89.65:56632.service - OpenSSH per-connection server daemon (139.178.89.65:56632).
Jul 7 00:10:10.112367 systemd-logind[1794]: Removed session 6.
Jul 7 00:10:10.155442 sshd[2023]: Accepted publickey for core from 139.178.89.65 port 56632 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8
Jul 7 00:10:10.156297 sshd[2023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:10:10.159486 systemd-logind[1794]: New session 7 of user core.
Jul 7 00:10:10.175680 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 7 00:10:10.236185 sshd[2023]: pam_unix(sshd:session): session closed for user core
Jul 7 00:10:10.256145 systemd[1]: sshd@4-147.28.180.255:22-139.178.89.65:56632.service: Deactivated successfully.
Jul 7 00:10:10.259736 systemd[1]: session-7.scope: Deactivated successfully.
Jul 7 00:10:10.263097 systemd-logind[1794]: Session 7 logged out. Waiting for processes to exit.
Jul 7 00:10:10.278900 systemd[1]: Started sshd@5-147.28.180.255:22-139.178.89.65:56642.service - OpenSSH per-connection server daemon (139.178.89.65:56642).
Jul 7 00:10:10.281431 systemd-logind[1794]: Removed session 7.
Jul 7 00:10:10.345072 sshd[2030]: Accepted publickey for core from 139.178.89.65 port 56642 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8
Jul 7 00:10:10.345893 sshd[2030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:10:10.348761 systemd-logind[1794]: New session 8 of user core.
Jul 7 00:10:10.359379 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 7 00:10:10.422943 sshd[2030]: pam_unix(sshd:session): session closed for user core
Jul 7 00:10:10.445983 systemd[1]: sshd@5-147.28.180.255:22-139.178.89.65:56642.service: Deactivated successfully.
Jul 7 00:10:10.449592 systemd[1]: session-8.scope: Deactivated successfully.
Jul 7 00:10:10.452963 systemd-logind[1794]: Session 8 logged out. Waiting for processes to exit.
Jul 7 00:10:10.465990 systemd[1]: Started sshd@6-147.28.180.255:22-139.178.89.65:56644.service - OpenSSH per-connection server daemon (139.178.89.65:56644).
Jul 7 00:10:10.468972 systemd-logind[1794]: Removed session 8. Jul 7 00:10:10.514353 sshd[2037]: Accepted publickey for core from 139.178.89.65 port 56644 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8 Jul 7 00:10:10.514992 sshd[2037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:10:10.517523 systemd-logind[1794]: New session 9 of user core. Jul 7 00:10:10.529412 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 00:10:10.585386 sudo[2040]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 00:10:10.585534 sudo[2040]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:10:10.604955 sudo[2040]: pam_unix(sudo:session): session closed for user root Jul 7 00:10:10.606096 sshd[2037]: pam_unix(sshd:session): session closed for user core Jul 7 00:10:10.628265 systemd[1]: sshd@6-147.28.180.255:22-139.178.89.65:56644.service: Deactivated successfully. Jul 7 00:10:10.631923 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:10:10.635365 systemd-logind[1794]: Session 9 logged out. Waiting for processes to exit. Jul 7 00:10:10.645492 systemd[1]: Started sshd@7-147.28.180.255:22-139.178.89.65:56648.service - OpenSSH per-connection server daemon (139.178.89.65:56648). Jul 7 00:10:10.646028 systemd-logind[1794]: Removed session 9. Jul 7 00:10:10.671991 sshd[2045]: Accepted publickey for core from 139.178.89.65 port 56648 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8 Jul 7 00:10:10.672685 sshd[2045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:10:10.674933 systemd-logind[1794]: New session 10 of user core. Jul 7 00:10:10.682393 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 7 00:10:10.732157 sudo[2049]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 00:10:10.732412 sudo[2049]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:10:10.735609 sudo[2049]: pam_unix(sudo:session): session closed for user root Jul 7 00:10:10.740667 sudo[2048]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 7 00:10:10.740983 sudo[2048]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:10:10.764788 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 7 00:10:10.767502 auditctl[2052]: No rules Jul 7 00:10:10.768150 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:10:10.768494 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 7 00:10:10.772905 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 00:10:10.830282 augenrules[2070]: No rules Jul 7 00:10:10.831961 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 00:10:10.834334 sudo[2048]: pam_unix(sudo:session): session closed for user root Jul 7 00:10:10.837798 sshd[2045]: pam_unix(sshd:session): session closed for user core Jul 7 00:10:10.861112 systemd[1]: sshd@7-147.28.180.255:22-139.178.89.65:56648.service: Deactivated successfully. Jul 7 00:10:10.864819 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:10:10.868377 systemd-logind[1794]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:10:10.888918 systemd[1]: Started sshd@8-147.28.180.255:22-139.178.89.65:56652.service - OpenSSH per-connection server daemon (139.178.89.65:56652). Jul 7 00:10:10.891814 systemd-logind[1794]: Removed session 10. 
Jul 7 00:10:10.970265 sshd[2078]: Accepted publickey for core from 139.178.89.65 port 56652 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8 Jul 7 00:10:10.971663 sshd[2078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:10:10.975862 systemd-logind[1794]: New session 11 of user core. Jul 7 00:10:10.989625 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:10:11.050546 sudo[2082]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 00:10:11.050695 sudo[2082]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:10:11.420519 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 00:10:11.420559 (dockerd)[2110]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 00:10:11.667548 dockerd[2110]: time="2025-07-07T00:10:11.667494613Z" level=info msg="Starting up" Jul 7 00:10:11.734279 dockerd[2110]: time="2025-07-07T00:10:11.734184955Z" level=info msg="Loading containers: start." Jul 7 00:10:11.813139 kernel: Initializing XFRM netlink socket Jul 7 00:10:11.865770 systemd-networkd[1606]: docker0: Link UP Jul 7 00:10:11.879105 dockerd[2110]: time="2025-07-07T00:10:11.879057898Z" level=info msg="Loading containers: done." Jul 7 00:10:11.888655 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1736794614-merged.mount: Deactivated successfully. 
Jul 7 00:10:11.889084 dockerd[2110]: time="2025-07-07T00:10:11.889044281Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 00:10:11.889118 dockerd[2110]: time="2025-07-07T00:10:11.889093187Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 00:10:11.889202 dockerd[2110]: time="2025-07-07T00:10:11.889148948Z" level=info msg="Daemon has completed initialization" Jul 7 00:10:11.904312 dockerd[2110]: time="2025-07-07T00:10:11.904277842Z" level=info msg="API listen on /run/docker.sock" Jul 7 00:10:11.904403 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 00:10:12.708601 containerd[1804]: time="2025-07-07T00:10:12.708464634Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 7 00:10:13.283836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3115944104.mount: Deactivated successfully. 
Jul 7 00:10:14.014335 containerd[1804]: time="2025-07-07T00:10:14.014284070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:14.014546 containerd[1804]: time="2025-07-07T00:10:14.014430282Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 7 00:10:14.014876 containerd[1804]: time="2025-07-07T00:10:14.014837918Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:14.016795 containerd[1804]: time="2025-07-07T00:10:14.016755672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:14.017269 containerd[1804]: time="2025-07-07T00:10:14.017226206Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.308673471s" Jul 7 00:10:14.017269 containerd[1804]: time="2025-07-07T00:10:14.017246540Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 7 00:10:14.017615 containerd[1804]: time="2025-07-07T00:10:14.017566372Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 7 00:10:15.005616 containerd[1804]: time="2025-07-07T00:10:15.005554587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:15.005845 containerd[1804]: time="2025-07-07T00:10:15.005796416Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 7 00:10:15.006346 containerd[1804]: time="2025-07-07T00:10:15.006305498Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:15.007898 containerd[1804]: time="2025-07-07T00:10:15.007858265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:15.008549 containerd[1804]: time="2025-07-07T00:10:15.008506519Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 990.921527ms" Jul 7 00:10:15.008549 containerd[1804]: time="2025-07-07T00:10:15.008523714Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 7 00:10:15.008793 containerd[1804]: time="2025-07-07T00:10:15.008781944Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 7 00:10:15.853878 containerd[1804]: time="2025-07-07T00:10:15.853817388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:15.854091 containerd[1804]: time="2025-07-07T00:10:15.853965464Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 7 00:10:15.854521 containerd[1804]: time="2025-07-07T00:10:15.854477622Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:15.856091 containerd[1804]: time="2025-07-07T00:10:15.856051161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:15.856736 containerd[1804]: time="2025-07-07T00:10:15.856695367Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 847.898281ms" Jul 7 00:10:15.856736 containerd[1804]: time="2025-07-07T00:10:15.856710867Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 7 00:10:15.857059 containerd[1804]: time="2025-07-07T00:10:15.857026457Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 7 00:10:16.712897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3331914996.mount: Deactivated successfully. 
Jul 7 00:10:16.910046 containerd[1804]: time="2025-07-07T00:10:16.910020820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:16.910253 containerd[1804]: time="2025-07-07T00:10:16.910209563Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 7 00:10:16.910577 containerd[1804]: time="2025-07-07T00:10:16.910536620Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:16.911450 containerd[1804]: time="2025-07-07T00:10:16.911432169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:16.911811 containerd[1804]: time="2025-07-07T00:10:16.911795937Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.054732633s" Jul 7 00:10:16.911851 containerd[1804]: time="2025-07-07T00:10:16.911811491Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 7 00:10:16.912065 containerd[1804]: time="2025-07-07T00:10:16.912054632Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 00:10:17.425002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3232030548.mount: Deactivated successfully. 
Jul 7 00:10:18.000166 containerd[1804]: time="2025-07-07T00:10:18.000141207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:18.000375 containerd[1804]: time="2025-07-07T00:10:18.000350779Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 7 00:10:18.000832 containerd[1804]: time="2025-07-07T00:10:18.000821416Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:18.002506 containerd[1804]: time="2025-07-07T00:10:18.002466045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:18.003139 containerd[1804]: time="2025-07-07T00:10:18.003090522Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.091017887s" Jul 7 00:10:18.003139 containerd[1804]: time="2025-07-07T00:10:18.003113508Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 00:10:18.003454 containerd[1804]: time="2025-07-07T00:10:18.003411807Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 00:10:18.556405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2208392064.mount: Deactivated successfully. 
Jul 7 00:10:18.574112 containerd[1804]: time="2025-07-07T00:10:18.574074174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:18.574445 containerd[1804]: time="2025-07-07T00:10:18.574403588Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 7 00:10:18.574846 containerd[1804]: time="2025-07-07T00:10:18.574794870Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:18.575923 containerd[1804]: time="2025-07-07T00:10:18.575890140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:18.576370 containerd[1804]: time="2025-07-07T00:10:18.576333433Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 572.907096ms" Jul 7 00:10:18.576370 containerd[1804]: time="2025-07-07T00:10:18.576348165Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 00:10:18.576679 containerd[1804]: time="2025-07-07T00:10:18.576668320Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 7 00:10:19.004314 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 00:10:19.017455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:10:19.251068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 00:10:19.253318 (kubelet)[2416]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:10:19.273561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1677269467.mount: Deactivated successfully. Jul 7 00:10:19.274903 kubelet[2416]: E0707 00:10:19.274883 2416 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:10:19.276120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:10:19.276285 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:10:20.316819 containerd[1804]: time="2025-07-07T00:10:20.316764802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:20.317029 containerd[1804]: time="2025-07-07T00:10:20.316945623Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 7 00:10:20.317431 containerd[1804]: time="2025-07-07T00:10:20.317389792Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:20.319260 containerd[1804]: time="2025-07-07T00:10:20.319219815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:20.320498 containerd[1804]: time="2025-07-07T00:10:20.320457653Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id 
\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.743775627s" Jul 7 00:10:20.320498 containerd[1804]: time="2025-07-07T00:10:20.320471847Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 7 00:10:22.282719 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:10:22.303479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:10:22.319750 systemd[1]: Reloading requested from client PID 2539 ('systemctl') (unit session-11.scope)... Jul 7 00:10:22.319758 systemd[1]: Reloading... Jul 7 00:10:22.360197 zram_generator::config[2578]: No configuration found. Jul 7 00:10:22.428143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:10:22.489225 systemd[1]: Reloading finished in 169 ms. Jul 7 00:10:22.539318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:10:22.540537 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:10:22.541724 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:10:22.541827 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:10:22.542687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:10:22.776501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 00:10:22.779429 (kubelet)[2647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:10:22.800833 kubelet[2647]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:10:22.800833 kubelet[2647]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 00:10:22.800833 kubelet[2647]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:10:22.800833 kubelet[2647]: I0707 00:10:22.800814 2647 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:10:23.170286 kubelet[2647]: I0707 00:10:23.170210 2647 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 00:10:23.170286 kubelet[2647]: I0707 00:10:23.170225 2647 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:10:23.170446 kubelet[2647]: I0707 00:10:23.170398 2647 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 00:10:23.188449 kubelet[2647]: E0707 00:10:23.188402 2647 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.28.180.255:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.28.180.255:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:10:23.189763 kubelet[2647]: I0707 00:10:23.189705 
2647 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:10:23.195999 kubelet[2647]: E0707 00:10:23.195963 2647 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 00:10:23.195999 kubelet[2647]: I0707 00:10:23.195993 2647 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 00:10:23.204394 kubelet[2647]: I0707 00:10:23.204358 2647 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 00:10:23.205737 kubelet[2647]: I0707 00:10:23.205694 2647 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:10:23.205830 kubelet[2647]: I0707 00:10:23.205709 2647 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.4-a-fd0ee851f3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:10:23.205830 kubelet[2647]: I0707 00:10:23.205803 2647 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:10:23.205830 kubelet[2647]: I0707 00:10:23.205809 2647 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 00:10:23.205928 kubelet[2647]: I0707 00:10:23.205875 2647 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:10:23.208852 kubelet[2647]: I0707 00:10:23.208813 2647 kubelet.go:446] 
"Attempting to sync node with API server" Jul 7 00:10:23.208852 kubelet[2647]: I0707 00:10:23.208827 2647 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:10:23.208852 kubelet[2647]: I0707 00:10:23.208838 2647 kubelet.go:352] "Adding apiserver pod source" Jul 7 00:10:23.208852 kubelet[2647]: I0707 00:10:23.208845 2647 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:10:23.210612 kubelet[2647]: W0707 00:10:23.210561 2647 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.180.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.180.255:6443: connect: connection refused Jul 7 00:10:23.210612 kubelet[2647]: E0707 00:10:23.210590 2647 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.180.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.180.255:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:10:23.211465 kubelet[2647]: I0707 00:10:23.211432 2647 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 00:10:23.211755 kubelet[2647]: I0707 00:10:23.211725 2647 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:10:23.211755 kubelet[2647]: W0707 00:10:23.211721 2647 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.180.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-fd0ee851f3&limit=500&resourceVersion=0": dial tcp 147.28.180.255:6443: connect: connection refused Jul 7 00:10:23.211801 kubelet[2647]: E0707 00:10:23.211763 2647 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.180.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-fd0ee851f3&limit=500&resourceVersion=0\": dial tcp 147.28.180.255:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:10:23.211801 kubelet[2647]: W0707 00:10:23.211769 2647 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 00:10:23.213109 kubelet[2647]: I0707 00:10:23.213071 2647 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:10:23.213109 kubelet[2647]: I0707 00:10:23.213088 2647 server.go:1287] "Started kubelet" Jul 7 00:10:23.216747 kubelet[2647]: I0707 00:10:23.216671 2647 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:10:23.216813 kubelet[2647]: I0707 00:10:23.216798 2647 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:10:23.217052 kubelet[2647]: I0707 00:10:23.217039 2647 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:10:23.219140 kubelet[2647]: E0707 00:10:23.218065 2647 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.180.255:6443/api/v1/namespaces/default/events\": dial tcp 147.28.180.255:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.4-a-fd0ee851f3.184fcf95ff70e114 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.4-a-fd0ee851f3,UID:ci-4081.3.4-a-fd0ee851f3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.4-a-fd0ee851f3,},FirstTimestamp:2025-07-07 00:10:23.213076756 +0000 UTC m=+0.431339324,LastTimestamp:2025-07-07 00:10:23.213076756 +0000 UTC m=+0.431339324,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.4-a-fd0ee851f3,}" Jul 7 00:10:23.219530 kubelet[2647]: I0707 00:10:23.219522 2647 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:10:23.219575 kubelet[2647]: I0707 00:10:23.219567 2647 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:10:23.219608 kubelet[2647]: E0707 00:10:23.219576 2647 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found" Jul 7 00:10:23.219608 kubelet[2647]: I0707 00:10:23.219599 2647 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:10:23.219672 kubelet[2647]: E0707 00:10:23.219613 2647 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:10:23.219672 kubelet[2647]: I0707 00:10:23.219603 2647 server.go:479] "Adding debug handlers to kubelet server" Jul 7 00:10:23.219672 kubelet[2647]: I0707 00:10:23.219657 2647 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:10:23.219763 kubelet[2647]: I0707 00:10:23.219614 2647 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:10:23.219763 kubelet[2647]: E0707 00:10:23.219740 2647 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-fd0ee851f3?timeout=10s\": dial tcp 147.28.180.255:6443: connect: connection refused" interval="200ms" Jul 7 00:10:23.219834 kubelet[2647]: W0707 00:10:23.219807 2647 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.180.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.255:6443: connect: 
connection refused Jul 7 00:10:23.219865 kubelet[2647]: E0707 00:10:23.219845 2647 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.180.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.180.255:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:10:23.219897 kubelet[2647]: I0707 00:10:23.219888 2647 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:10:23.219953 kubelet[2647]: I0707 00:10:23.219943 2647 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:10:23.220379 kubelet[2647]: I0707 00:10:23.220371 2647 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:10:23.228193 kubelet[2647]: I0707 00:10:23.228165 2647 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:10:23.228716 kubelet[2647]: I0707 00:10:23.228705 2647 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 00:10:23.228745 kubelet[2647]: I0707 00:10:23.228721 2647 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 00:10:23.228745 kubelet[2647]: I0707 00:10:23.228737 2647 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 00:10:23.228801 kubelet[2647]: I0707 00:10:23.228745 2647 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 00:10:23.228801 kubelet[2647]: E0707 00:10:23.228779 2647 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:10:23.230470 kubelet[2647]: W0707 00:10:23.230409 2647 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.180.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.255:6443: connect: connection refused Jul 7 00:10:23.230470 kubelet[2647]: E0707 00:10:23.230439 2647 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.180.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.180.255:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:10:23.286011 kubelet[2647]: I0707 00:10:23.285981 2647 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:10:23.286011 kubelet[2647]: I0707 00:10:23.286000 2647 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:10:23.286011 kubelet[2647]: I0707 00:10:23.286019 2647 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:10:23.294226 kubelet[2647]: I0707 00:10:23.294183 2647 policy_none.go:49] "None policy: Start" Jul 7 00:10:23.294226 kubelet[2647]: I0707 00:10:23.294202 2647 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:10:23.294226 kubelet[2647]: I0707 00:10:23.294214 2647 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:10:23.297520 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 00:10:23.316930 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 7 00:10:23.318896 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 00:10:23.320514 kubelet[2647]: E0707 00:10:23.320472 2647 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found" Jul 7 00:10:23.329852 kubelet[2647]: E0707 00:10:23.329808 2647 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 00:10:23.335900 kubelet[2647]: I0707 00:10:23.335857 2647 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:10:23.336011 kubelet[2647]: I0707 00:10:23.336000 2647 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:10:23.336048 kubelet[2647]: I0707 00:10:23.336011 2647 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:10:23.336202 kubelet[2647]: I0707 00:10:23.336159 2647 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:10:23.336668 kubelet[2647]: E0707 00:10:23.336626 2647 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 00:10:23.336668 kubelet[2647]: E0707 00:10:23.336665 2647 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.4-a-fd0ee851f3\" not found" Jul 7 00:10:23.421249 kubelet[2647]: E0707 00:10:23.420947 2647 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-fd0ee851f3?timeout=10s\": dial tcp 147.28.180.255:6443: connect: connection refused" interval="400ms" Jul 7 00:10:23.441277 kubelet[2647]: I0707 00:10:23.441171 2647 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:23.442070 kubelet[2647]: E0707 00:10:23.441959 2647 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.28.180.255:6443/api/v1/nodes\": dial tcp 147.28.180.255:6443: connect: connection refused" node="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:23.554553 systemd[1]: Created slice kubepods-burstable-pod45cc6cc7b414865027905b16e343e0d7.slice - libcontainer container kubepods-burstable-pod45cc6cc7b414865027905b16e343e0d7.slice. Jul 7 00:10:23.579030 kubelet[2647]: E0707 00:10:23.578937 2647 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found" node="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:23.585394 systemd[1]: Created slice kubepods-burstable-pod0b1bf5b6931aa9bc95d73b3026c5a3c6.slice - libcontainer container kubepods-burstable-pod0b1bf5b6931aa9bc95d73b3026c5a3c6.slice. 
Jul 7 00:10:23.589685 kubelet[2647]: E0707 00:10:23.589600 2647 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found" node="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:23.593999 systemd[1]: Created slice kubepods-burstable-podc594c98677c0ad888d629db195871c44.slice - libcontainer container kubepods-burstable-podc594c98677c0ad888d629db195871c44.slice. Jul 7 00:10:23.597755 kubelet[2647]: E0707 00:10:23.597676 2647 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found" node="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:23.646536 kubelet[2647]: I0707 00:10:23.646442 2647 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:23.647350 kubelet[2647]: E0707 00:10:23.647243 2647 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.28.180.255:6443/api/v1/nodes\": dial tcp 147.28.180.255:6443: connect: connection refused" node="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:23.721690 kubelet[2647]: I0707 00:10:23.721445 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45cc6cc7b414865027905b16e343e0d7-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-fd0ee851f3\" (UID: \"45cc6cc7b414865027905b16e343e0d7\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:23.721690 kubelet[2647]: I0707 00:10:23.721537 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/45cc6cc7b414865027905b16e343e0d7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-a-fd0ee851f3\" (UID: \"45cc6cc7b414865027905b16e343e0d7\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fd0ee851f3" Jul 7 
00:10:23.721690 kubelet[2647]: I0707 00:10:23.721605 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45cc6cc7b414865027905b16e343e0d7-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-a-fd0ee851f3\" (UID: \"45cc6cc7b414865027905b16e343e0d7\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:23.722179 kubelet[2647]: I0707 00:10:23.721662 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45cc6cc7b414865027905b16e343e0d7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-a-fd0ee851f3\" (UID: \"45cc6cc7b414865027905b16e343e0d7\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:23.722179 kubelet[2647]: I0707 00:10:23.721776 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b1bf5b6931aa9bc95d73b3026c5a3c6-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-a-fd0ee851f3\" (UID: \"0b1bf5b6931aa9bc95d73b3026c5a3c6\") " pod="kube-system/kube-scheduler-ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:23.722179 kubelet[2647]: I0707 00:10:23.721825 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45cc6cc7b414865027905b16e343e0d7-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-fd0ee851f3\" (UID: \"45cc6cc7b414865027905b16e343e0d7\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:23.722179 kubelet[2647]: I0707 00:10:23.721874 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c594c98677c0ad888d629db195871c44-ca-certs\") pod 
\"kube-apiserver-ci-4081.3.4-a-fd0ee851f3\" (UID: \"c594c98677c0ad888d629db195871c44\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:23.722179 kubelet[2647]: I0707 00:10:23.721927 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c594c98677c0ad888d629db195871c44-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-a-fd0ee851f3\" (UID: \"c594c98677c0ad888d629db195871c44\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:23.722682 kubelet[2647]: I0707 00:10:23.721978 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c594c98677c0ad888d629db195871c44-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-a-fd0ee851f3\" (UID: \"c594c98677c0ad888d629db195871c44\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:23.822961 kubelet[2647]: E0707 00:10:23.822829 2647 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-a-fd0ee851f3?timeout=10s\": dial tcp 147.28.180.255:6443: connect: connection refused" interval="800ms" Jul 7 00:10:23.882042 containerd[1804]: time="2025-07-07T00:10:23.881898473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-a-fd0ee851f3,Uid:45cc6cc7b414865027905b16e343e0d7,Namespace:kube-system,Attempt:0,}" Jul 7 00:10:23.890350 containerd[1804]: time="2025-07-07T00:10:23.890285247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-a-fd0ee851f3,Uid:0b1bf5b6931aa9bc95d73b3026c5a3c6,Namespace:kube-system,Attempt:0,}" Jul 7 00:10:23.900994 containerd[1804]: time="2025-07-07T00:10:23.900950645Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-a-fd0ee851f3,Uid:c594c98677c0ad888d629db195871c44,Namespace:kube-system,Attempt:0,}" Jul 7 00:10:24.050056 kubelet[2647]: I0707 00:10:24.050040 2647 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:24.050312 kubelet[2647]: E0707 00:10:24.050267 2647 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.28.180.255:6443/api/v1/nodes\": dial tcp 147.28.180.255:6443: connect: connection refused" node="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:24.094352 kubelet[2647]: W0707 00:10:24.094257 2647 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.180.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.255:6443: connect: connection refused Jul 7 00:10:24.094352 kubelet[2647]: E0707 00:10:24.094331 2647 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.180.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.180.255:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:10:24.106226 kubelet[2647]: W0707 00:10:24.106173 2647 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.180.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-fd0ee851f3&limit=500&resourceVersion=0": dial tcp 147.28.180.255:6443: connect: connection refused Jul 7 00:10:24.106274 kubelet[2647]: E0707 00:10:24.106226 2647 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.180.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-a-fd0ee851f3&limit=500&resourceVersion=0\": dial tcp 147.28.180.255:6443: connect: connection 
refused" logger="UnhandledError" Jul 7 00:10:24.278942 kubelet[2647]: W0707 00:10:24.278879 2647 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.180.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.180.255:6443: connect: connection refused Jul 7 00:10:24.278942 kubelet[2647]: E0707 00:10:24.278939 2647 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.180.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.180.255:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:10:24.348797 kubelet[2647]: W0707 00:10:24.348689 2647 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.180.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.255:6443: connect: connection refused Jul 7 00:10:24.348797 kubelet[2647]: E0707 00:10:24.348730 2647 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.180.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.180.255:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:10:24.372182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2295394121.mount: Deactivated successfully. 
Jul 7 00:10:24.373632 containerd[1804]: time="2025-07-07T00:10:24.373583745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:10:24.374322 containerd[1804]: time="2025-07-07T00:10:24.374274639Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:10:24.374796 containerd[1804]: time="2025-07-07T00:10:24.374758374Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:10:24.374919 containerd[1804]: time="2025-07-07T00:10:24.374862467Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 7 00:10:24.375104 containerd[1804]: time="2025-07-07T00:10:24.375067873Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 00:10:24.375406 containerd[1804]: time="2025-07-07T00:10:24.375350709Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 00:10:24.375790 containerd[1804]: time="2025-07-07T00:10:24.375754688Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:10:24.377938 containerd[1804]: time="2025-07-07T00:10:24.377896936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:10:24.379002 
containerd[1804]: time="2025-07-07T00:10:24.378844676Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 496.707252ms" Jul 7 00:10:24.379756 containerd[1804]: time="2025-07-07T00:10:24.379719091Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 489.385674ms" Jul 7 00:10:24.381120 containerd[1804]: time="2025-07-07T00:10:24.381074692Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 480.093705ms" Jul 7 00:10:24.488095 containerd[1804]: time="2025-07-07T00:10:24.488042204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:10:24.488185 containerd[1804]: time="2025-07-07T00:10:24.488131614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:10:24.488312 containerd[1804]: time="2025-07-07T00:10:24.488287606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:10:24.488312 containerd[1804]: time="2025-07-07T00:10:24.488305029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:10:24.488374 containerd[1804]: time="2025-07-07T00:10:24.488356921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:10:24.488393 containerd[1804]: time="2025-07-07T00:10:24.488363709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:10:24.488393 containerd[1804]: time="2025-07-07T00:10:24.488373689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:10:24.488450 containerd[1804]: time="2025-07-07T00:10:24.488416046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:10:24.488648 containerd[1804]: time="2025-07-07T00:10:24.488432033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:10:24.488684 containerd[1804]: time="2025-07-07T00:10:24.488648092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:10:24.488684 containerd[1804]: time="2025-07-07T00:10:24.488656520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:10:24.488738 containerd[1804]: time="2025-07-07T00:10:24.488694165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:10:24.503429 systemd[1]: Started cri-containerd-18d8e9498608c38cb12f8b273328f70a801cbb47c3de9f2bffa80dba7dfaf769.scope - libcontainer container 18d8e9498608c38cb12f8b273328f70a801cbb47c3de9f2bffa80dba7dfaf769. 
Jul 7 00:10:24.504266 systemd[1]: Started cri-containerd-7d173dac8bc7ce0096a548783fb3dd6f8fea34494b018d16e78219c760bc1174.scope - libcontainer container 7d173dac8bc7ce0096a548783fb3dd6f8fea34494b018d16e78219c760bc1174. Jul 7 00:10:24.505112 systemd[1]: Started cri-containerd-e40f54395e59d2ce037981ca5d68c16ca43cfb585e5ca03530bd69a005ddd8a6.scope - libcontainer container e40f54395e59d2ce037981ca5d68c16ca43cfb585e5ca03530bd69a005ddd8a6. Jul 7 00:10:24.525478 containerd[1804]: time="2025-07-07T00:10:24.525455791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-a-fd0ee851f3,Uid:45cc6cc7b414865027905b16e343e0d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"18d8e9498608c38cb12f8b273328f70a801cbb47c3de9f2bffa80dba7dfaf769\"" Jul 7 00:10:24.526498 containerd[1804]: time="2025-07-07T00:10:24.526481588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-a-fd0ee851f3,Uid:0b1bf5b6931aa9bc95d73b3026c5a3c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d173dac8bc7ce0096a548783fb3dd6f8fea34494b018d16e78219c760bc1174\"" Jul 7 00:10:24.526722 containerd[1804]: time="2025-07-07T00:10:24.526706645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-a-fd0ee851f3,Uid:c594c98677c0ad888d629db195871c44,Namespace:kube-system,Attempt:0,} returns sandbox id \"e40f54395e59d2ce037981ca5d68c16ca43cfb585e5ca03530bd69a005ddd8a6\"" Jul 7 00:10:24.526966 containerd[1804]: time="2025-07-07T00:10:24.526952752Z" level=info msg="CreateContainer within sandbox \"18d8e9498608c38cb12f8b273328f70a801cbb47c3de9f2bffa80dba7dfaf769\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 00:10:24.527449 containerd[1804]: time="2025-07-07T00:10:24.527437551Z" level=info msg="CreateContainer within sandbox \"7d173dac8bc7ce0096a548783fb3dd6f8fea34494b018d16e78219c760bc1174\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 
00:10:24.527618 containerd[1804]: time="2025-07-07T00:10:24.527608657Z" level=info msg="CreateContainer within sandbox \"e40f54395e59d2ce037981ca5d68c16ca43cfb585e5ca03530bd69a005ddd8a6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 00:10:24.533625 containerd[1804]: time="2025-07-07T00:10:24.533574401Z" level=info msg="CreateContainer within sandbox \"7d173dac8bc7ce0096a548783fb3dd6f8fea34494b018d16e78219c760bc1174\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2ef4e9b86b22805c375626dce9faee8b88f99185098210744d9b35ba43027773\"" Jul 7 00:10:24.533832 containerd[1804]: time="2025-07-07T00:10:24.533795889Z" level=info msg="StartContainer for \"2ef4e9b86b22805c375626dce9faee8b88f99185098210744d9b35ba43027773\"" Jul 7 00:10:24.534059 containerd[1804]: time="2025-07-07T00:10:24.534046271Z" level=info msg="CreateContainer within sandbox \"18d8e9498608c38cb12f8b273328f70a801cbb47c3de9f2bffa80dba7dfaf769\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"82eeb24885ff846c8e9ca89afa01f5329f03d3d014af2ae60ccaf5dfedd53f6e\"" Jul 7 00:10:24.534235 containerd[1804]: time="2025-07-07T00:10:24.534190665Z" level=info msg="StartContainer for \"82eeb24885ff846c8e9ca89afa01f5329f03d3d014af2ae60ccaf5dfedd53f6e\"" Jul 7 00:10:24.537467 containerd[1804]: time="2025-07-07T00:10:24.537445635Z" level=info msg="CreateContainer within sandbox \"e40f54395e59d2ce037981ca5d68c16ca43cfb585e5ca03530bd69a005ddd8a6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"929fc33e55166c7caa14c2d9e08f6bc615f944d486c2c4097a6282df4db68686\"" Jul 7 00:10:24.537792 containerd[1804]: time="2025-07-07T00:10:24.537778407Z" level=info msg="StartContainer for \"929fc33e55166c7caa14c2d9e08f6bc615f944d486c2c4097a6282df4db68686\"" Jul 7 00:10:24.562443 systemd[1]: Started cri-containerd-2ef4e9b86b22805c375626dce9faee8b88f99185098210744d9b35ba43027773.scope - libcontainer container 
2ef4e9b86b22805c375626dce9faee8b88f99185098210744d9b35ba43027773. Jul 7 00:10:24.563107 systemd[1]: Started cri-containerd-82eeb24885ff846c8e9ca89afa01f5329f03d3d014af2ae60ccaf5dfedd53f6e.scope - libcontainer container 82eeb24885ff846c8e9ca89afa01f5329f03d3d014af2ae60ccaf5dfedd53f6e. Jul 7 00:10:24.564755 systemd[1]: Started cri-containerd-929fc33e55166c7caa14c2d9e08f6bc615f944d486c2c4097a6282df4db68686.scope - libcontainer container 929fc33e55166c7caa14c2d9e08f6bc615f944d486c2c4097a6282df4db68686. Jul 7 00:10:24.589009 containerd[1804]: time="2025-07-07T00:10:24.588975734Z" level=info msg="StartContainer for \"2ef4e9b86b22805c375626dce9faee8b88f99185098210744d9b35ba43027773\" returns successfully" Jul 7 00:10:24.589124 containerd[1804]: time="2025-07-07T00:10:24.589035205Z" level=info msg="StartContainer for \"82eeb24885ff846c8e9ca89afa01f5329f03d3d014af2ae60ccaf5dfedd53f6e\" returns successfully" Jul 7 00:10:24.591116 containerd[1804]: time="2025-07-07T00:10:24.591094205Z" level=info msg="StartContainer for \"929fc33e55166c7caa14c2d9e08f6bc615f944d486c2c4097a6282df4db68686\" returns successfully" Jul 7 00:10:24.852534 kubelet[2647]: I0707 00:10:24.852516 2647 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:25.234870 kubelet[2647]: E0707 00:10:25.234816 2647 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found" node="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:25.235096 kubelet[2647]: E0707 00:10:25.235085 2647 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found" node="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:25.235645 kubelet[2647]: E0707 00:10:25.235636 2647 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found" 
node="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:25.295185 kubelet[2647]: E0707 00:10:25.293962 2647 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.4-a-fd0ee851f3\" not found" node="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:25.514392 kubelet[2647]: I0707 00:10:25.514332 2647 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:25.514607 kubelet[2647]: E0707 00:10:25.514429 2647 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.4-a-fd0ee851f3\": node \"ci-4081.3.4-a-fd0ee851f3\" not found" Jul 7 00:10:25.535304 kubelet[2647]: E0707 00:10:25.535235 2647 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found" Jul 7 00:10:25.636426 kubelet[2647]: E0707 00:10:25.636353 2647 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found" Jul 7 00:10:25.737610 kubelet[2647]: E0707 00:10:25.737527 2647 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found" Jul 7 00:10:25.838033 kubelet[2647]: E0707 00:10:25.837785 2647 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found" Jul 7 00:10:25.938739 kubelet[2647]: E0707 00:10:25.938615 2647 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found" Jul 7 00:10:26.039221 kubelet[2647]: E0707 00:10:26.039084 2647 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found" Jul 7 00:10:26.140114 kubelet[2647]: E0707 00:10:26.139868 2647 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found" Jul 7 00:10:26.240598 kubelet[2647]: 
E0707 00:10:26.240533 2647 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found"
Jul 7 00:10:26.241782 kubelet[2647]: E0707 00:10:26.241697 2647 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found" node="ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:26.241996 kubelet[2647]: E0707 00:10:26.241890 2647 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found" node="ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:26.340710 kubelet[2647]: E0707 00:10:26.340643 2647 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found"
Jul 7 00:10:26.441395 kubelet[2647]: E0707 00:10:26.441269 2647 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found"
Jul 7 00:10:26.541457 kubelet[2647]: E0707 00:10:26.541360 2647 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found"
Jul 7 00:10:26.642197 kubelet[2647]: E0707 00:10:26.642096 2647 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found"
Jul 7 00:10:26.743244 kubelet[2647]: E0707 00:10:26.743044 2647 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found"
Jul 7 00:10:26.844315 kubelet[2647]: E0707 00:10:26.844249 2647 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found"
Jul 7 00:10:27.020014 kubelet[2647]: I0707 00:10:27.019910 2647 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:27.034825 kubelet[2647]: W0707 00:10:27.034739 2647 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 00:10:27.035148 kubelet[2647]: I0707 00:10:27.035054 2647 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:27.041400 kubelet[2647]: W0707 00:10:27.041316 2647 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 00:10:27.041564 kubelet[2647]: I0707 00:10:27.041463 2647 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:27.047741 kubelet[2647]: W0707 00:10:27.047644 2647 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 00:10:27.209820 kubelet[2647]: I0707 00:10:27.209708 2647 apiserver.go:52] "Watching apiserver"
Jul 7 00:10:27.220506 kubelet[2647]: I0707 00:10:27.220417 2647 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 7 00:10:27.727949 systemd[1]: Reloading requested from client PID 2964 ('systemctl') (unit session-11.scope)...
Jul 7 00:10:27.727956 systemd[1]: Reloading...
Jul 7 00:10:27.769206 zram_generator::config[3003]: No configuration found.
Jul 7 00:10:27.844957 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 00:10:27.913038 systemd[1]: Reloading finished in 184 ms.
Jul 7 00:10:27.946557 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:10:27.950895 systemd[1]: kubelet.service: Deactivated successfully.
Jul 7 00:10:27.950999 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:10:27.961378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:10:28.197934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:10:28.200160 (kubelet)[3067]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 00:10:28.221037 kubelet[3067]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 00:10:28.221037 kubelet[3067]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 7 00:10:28.221037 kubelet[3067]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 00:10:28.221289 kubelet[3067]: I0707 00:10:28.221079 3067 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 00:10:28.224430 kubelet[3067]: I0707 00:10:28.224392 3067 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 7 00:10:28.224430 kubelet[3067]: I0707 00:10:28.224402 3067 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 00:10:28.224550 kubelet[3067]: I0707 00:10:28.224520 3067 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 7 00:10:28.225195 kubelet[3067]: I0707 00:10:28.225151 3067 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 7 00:10:28.238543 kubelet[3067]: I0707 00:10:28.238481 3067 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 00:10:28.249214 kubelet[3067]: E0707 00:10:28.249115 3067 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 7 00:10:28.249416 kubelet[3067]: I0707 00:10:28.249222 3067 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 7 00:10:28.276114 kubelet[3067]: I0707 00:10:28.276028 3067 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 00:10:28.276695 kubelet[3067]: I0707 00:10:28.276578 3067 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 00:10:28.277156 kubelet[3067]: I0707 00:10:28.276649 3067 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-a-fd0ee851f3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 00:10:28.277156 kubelet[3067]: I0707 00:10:28.277144 3067 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 00:10:28.277779 kubelet[3067]: I0707 00:10:28.277195 3067 container_manager_linux.go:304] "Creating device plugin manager"
Jul 7 00:10:28.277779 kubelet[3067]: I0707 00:10:28.277351 3067 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 00:10:28.278119 kubelet[3067]: I0707 00:10:28.277883 3067 kubelet.go:446] "Attempting to sync node with API server"
Jul 7 00:10:28.278119 kubelet[3067]: I0707 00:10:28.277938 3067 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 00:10:28.278119 kubelet[3067]: I0707 00:10:28.277985 3067 kubelet.go:352] "Adding apiserver pod source"
Jul 7 00:10:28.278119 kubelet[3067]: I0707 00:10:28.278026 3067 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 00:10:28.279597 kubelet[3067]: I0707 00:10:28.279518 3067 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 7 00:10:28.280519 kubelet[3067]: I0707 00:10:28.280488 3067 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 7 00:10:28.281400 kubelet[3067]: I0707 00:10:28.281338 3067 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 7 00:10:28.281536 kubelet[3067]: I0707 00:10:28.281498 3067 server.go:1287] "Started kubelet"
Jul 7 00:10:28.281821 kubelet[3067]: I0707 00:10:28.281708 3067 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 00:10:28.281987 kubelet[3067]: I0707 00:10:28.281725 3067 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 00:10:28.282403 kubelet[3067]: I0707 00:10:28.282358 3067 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 00:10:28.286584 kubelet[3067]: I0707 00:10:28.286554 3067 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 00:10:28.286740 kubelet[3067]: I0707 00:10:28.286651 3067 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 00:10:28.286740 kubelet[3067]: I0707 00:10:28.286717 3067 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 7 00:10:28.286926 kubelet[3067]: E0707 00:10:28.286756 3067 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-a-fd0ee851f3\" not found"
Jul 7 00:10:28.286926 kubelet[3067]: E0707 00:10:28.286775 3067 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 00:10:28.286926 kubelet[3067]: I0707 00:10:28.286806 3067 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 7 00:10:28.287225 kubelet[3067]: I0707 00:10:28.287060 3067 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 00:10:28.287324 kubelet[3067]: I0707 00:10:28.287219 3067 server.go:479] "Adding debug handlers to kubelet server"
Jul 7 00:10:28.288093 kubelet[3067]: I0707 00:10:28.287981 3067 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 00:10:28.289060 kubelet[3067]: I0707 00:10:28.289041 3067 factory.go:221] Registration of the containerd container factory successfully
Jul 7 00:10:28.289060 kubelet[3067]: I0707 00:10:28.289061 3067 factory.go:221] Registration of the systemd container factory successfully
Jul 7 00:10:28.297559 kubelet[3067]: I0707 00:10:28.297529 3067 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 7 00:10:28.298504 kubelet[3067]: I0707 00:10:28.298484 3067 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 7 00:10:28.298573 kubelet[3067]: I0707 00:10:28.298515 3067 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 7 00:10:28.298573 kubelet[3067]: I0707 00:10:28.298538 3067 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 7 00:10:28.298573 kubelet[3067]: I0707 00:10:28.298549 3067 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 7 00:10:28.298667 kubelet[3067]: E0707 00:10:28.298601 3067 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 7 00:10:28.311913 kubelet[3067]: I0707 00:10:28.311864 3067 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 7 00:10:28.311913 kubelet[3067]: I0707 00:10:28.311875 3067 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 7 00:10:28.311913 kubelet[3067]: I0707 00:10:28.311887 3067 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 00:10:28.312036 kubelet[3067]: I0707 00:10:28.311994 3067 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 7 00:10:28.312036 kubelet[3067]: I0707 00:10:28.312002 3067 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 7 00:10:28.312036 kubelet[3067]: I0707 00:10:28.312016 3067 policy_none.go:49] "None policy: Start"
Jul 7 00:10:28.312036 kubelet[3067]: I0707 00:10:28.312021 3067 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 7 00:10:28.312036 kubelet[3067]: I0707 00:10:28.312028 3067 state_mem.go:35] "Initializing new in-memory state store"
Jul 7 00:10:28.312165 kubelet[3067]: I0707 00:10:28.312102 3067 state_mem.go:75] "Updated machine memory state"
Jul 7 00:10:28.315367 kubelet[3067]: I0707 00:10:28.315325 3067 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 7 00:10:28.315489 kubelet[3067]: I0707 00:10:28.315437 3067 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 7 00:10:28.315489 kubelet[3067]: I0707 00:10:28.315445 3067 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 7 00:10:28.315593 kubelet[3067]: I0707 00:10:28.315581 3067 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 7 00:10:28.315987 kubelet[3067]: E0707 00:10:28.315972 3067 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 7 00:10:28.400631 kubelet[3067]: I0707 00:10:28.400525 3067 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:28.400962 kubelet[3067]: I0707 00:10:28.400670 3067 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:28.400962 kubelet[3067]: I0707 00:10:28.400525 3067 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:28.408172 kubelet[3067]: W0707 00:10:28.408097 3067 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 00:10:28.408368 kubelet[3067]: E0707 00:10:28.408264 3067 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.4-a-fd0ee851f3\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:28.408598 kubelet[3067]: W0707 00:10:28.408363 3067 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 00:10:28.408598 kubelet[3067]: E0707 00:10:28.408534 3067 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.4-a-fd0ee851f3\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:28.408880 kubelet[3067]: W0707 00:10:28.408688 3067 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 00:10:28.408880 kubelet[3067]: E0707 00:10:28.408784 3067 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.4-a-fd0ee851f3\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:28.423024 kubelet[3067]: I0707 00:10:28.422971 3067 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:28.431806 kubelet[3067]: I0707 00:10:28.431713 3067 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:28.432002 kubelet[3067]: I0707 00:10:28.431857 3067 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:28.588442 kubelet[3067]: I0707 00:10:28.588315 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c594c98677c0ad888d629db195871c44-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-a-fd0ee851f3\" (UID: \"c594c98677c0ad888d629db195871c44\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:28.588442 kubelet[3067]: I0707 00:10:28.588415 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c594c98677c0ad888d629db195871c44-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-a-fd0ee851f3\" (UID: \"c594c98677c0ad888d629db195871c44\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:28.588827 kubelet[3067]: I0707 00:10:28.588481 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45cc6cc7b414865027905b16e343e0d7-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-a-fd0ee851f3\" (UID: \"45cc6cc7b414865027905b16e343e0d7\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:28.588827 kubelet[3067]: I0707 00:10:28.588540 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45cc6cc7b414865027905b16e343e0d7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-a-fd0ee851f3\" (UID: \"45cc6cc7b414865027905b16e343e0d7\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:28.588827 kubelet[3067]: I0707 00:10:28.588602 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b1bf5b6931aa9bc95d73b3026c5a3c6-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-a-fd0ee851f3\" (UID: \"0b1bf5b6931aa9bc95d73b3026c5a3c6\") " pod="kube-system/kube-scheduler-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:28.588827 kubelet[3067]: I0707 00:10:28.588657 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c594c98677c0ad888d629db195871c44-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-a-fd0ee851f3\" (UID: \"c594c98677c0ad888d629db195871c44\") " pod="kube-system/kube-apiserver-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:28.588827 kubelet[3067]: I0707 00:10:28.588753 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45cc6cc7b414865027905b16e343e0d7-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-fd0ee851f3\" (UID: \"45cc6cc7b414865027905b16e343e0d7\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:28.589476 kubelet[3067]: I0707 00:10:28.588805 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/45cc6cc7b414865027905b16e343e0d7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-a-fd0ee851f3\" (UID: \"45cc6cc7b414865027905b16e343e0d7\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:28.589476 kubelet[3067]: I0707 00:10:28.588907 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45cc6cc7b414865027905b16e343e0d7-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-a-fd0ee851f3\" (UID: \"45cc6cc7b414865027905b16e343e0d7\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:29.279222 kubelet[3067]: I0707 00:10:29.279206 3067 apiserver.go:52] "Watching apiserver"
Jul 7 00:10:29.287904 kubelet[3067]: I0707 00:10:29.287861 3067 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 7 00:10:29.303396 kubelet[3067]: I0707 00:10:29.303380 3067 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:29.303495 kubelet[3067]: I0707 00:10:29.303486 3067 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:29.307029 kubelet[3067]: W0707 00:10:29.307016 3067 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 00:10:29.307084 kubelet[3067]: E0707 00:10:29.307048 3067 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.4-a-fd0ee851f3\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:29.307251 kubelet[3067]: W0707 00:10:29.307243 3067 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 00:10:29.307278 kubelet[3067]: E0707 00:10:29.307265 3067 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.4-a-fd0ee851f3\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.4-a-fd0ee851f3"
Jul 7 00:10:29.342483 kubelet[3067]: I0707 00:10:29.342448 3067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.4-a-fd0ee851f3" podStartSLOduration=2.342437334 podStartE2EDuration="2.342437334s" podCreationTimestamp="2025-07-07 00:10:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:10:29.337457078 +0000 UTC m=+1.135493324" watchObservedRunningTime="2025-07-07 00:10:29.342437334 +0000 UTC m=+1.140473578"
Jul 7 00:10:29.346942 kubelet[3067]: I0707 00:10:29.346916 3067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.4-a-fd0ee851f3" podStartSLOduration=2.346906667 podStartE2EDuration="2.346906667s" podCreationTimestamp="2025-07-07 00:10:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:10:29.342521274 +0000 UTC m=+1.140557521" watchObservedRunningTime="2025-07-07 00:10:29.346906667 +0000 UTC m=+1.144942918"
Jul 7 00:10:29.347027 kubelet[3067]: I0707 00:10:29.346975 3067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.4-a-fd0ee851f3" podStartSLOduration=2.346970725 podStartE2EDuration="2.346970725s" podCreationTimestamp="2025-07-07 00:10:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:10:29.346813796 +0000 UTC m=+1.144850058" watchObservedRunningTime="2025-07-07 00:10:29.346970725 +0000 UTC m=+1.145006975"
Jul 7 00:10:32.655551 kubelet[3067]: I0707 00:10:32.655467 3067 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 7 00:10:32.656445 containerd[1804]: time="2025-07-07T00:10:32.656196724Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 7 00:10:32.657060 kubelet[3067]: I0707 00:10:32.656624 3067 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 7 00:10:33.207546 systemd[1]: Created slice kubepods-besteffort-pod4384a66d_9811_4e14_9c7c_73b293cb2080.slice - libcontainer container kubepods-besteffort-pod4384a66d_9811_4e14_9c7c_73b293cb2080.slice.
Jul 7 00:10:33.222782 kubelet[3067]: I0707 00:10:33.222742 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4384a66d-9811-4e14-9c7c-73b293cb2080-xtables-lock\") pod \"kube-proxy-dd9sf\" (UID: \"4384a66d-9811-4e14-9c7c-73b293cb2080\") " pod="kube-system/kube-proxy-dd9sf"
Jul 7 00:10:33.222782 kubelet[3067]: I0707 00:10:33.222782 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4384a66d-9811-4e14-9c7c-73b293cb2080-lib-modules\") pod \"kube-proxy-dd9sf\" (UID: \"4384a66d-9811-4e14-9c7c-73b293cb2080\") " pod="kube-system/kube-proxy-dd9sf"
Jul 7 00:10:33.222998 kubelet[3067]: I0707 00:10:33.222802 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwf6r\" (UniqueName: \"kubernetes.io/projected/4384a66d-9811-4e14-9c7c-73b293cb2080-kube-api-access-wwf6r\") pod \"kube-proxy-dd9sf\" (UID: \"4384a66d-9811-4e14-9c7c-73b293cb2080\") " pod="kube-system/kube-proxy-dd9sf"
Jul 7 00:10:33.222998 kubelet[3067]: I0707 00:10:33.222828 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4384a66d-9811-4e14-9c7c-73b293cb2080-kube-proxy\") pod \"kube-proxy-dd9sf\" (UID: \"4384a66d-9811-4e14-9c7c-73b293cb2080\") " pod="kube-system/kube-proxy-dd9sf"
Jul 7 00:10:33.335802 kubelet[3067]: E0707 00:10:33.335698 3067 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 7 00:10:33.335802 kubelet[3067]: E0707 00:10:33.335760 3067 projected.go:194] Error preparing data for projected volume kube-api-access-wwf6r for pod kube-system/kube-proxy-dd9sf: configmap "kube-root-ca.crt" not found
Jul 7 00:10:33.336179 kubelet[3067]: E0707 00:10:33.335894 3067 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4384a66d-9811-4e14-9c7c-73b293cb2080-kube-api-access-wwf6r podName:4384a66d-9811-4e14-9c7c-73b293cb2080 nodeName:}" failed. No retries permitted until 2025-07-07 00:10:33.835845534 +0000 UTC m=+5.633881847 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wwf6r" (UniqueName: "kubernetes.io/projected/4384a66d-9811-4e14-9c7c-73b293cb2080-kube-api-access-wwf6r") pod "kube-proxy-dd9sf" (UID: "4384a66d-9811-4e14-9c7c-73b293cb2080") : configmap "kube-root-ca.crt" not found
Jul 7 00:10:33.730667 systemd[1]: Created slice kubepods-besteffort-pod9683e6b0_ce1d_45ac_b33e_1bc1e17f5492.slice - libcontainer container kubepods-besteffort-pod9683e6b0_ce1d_45ac_b33e_1bc1e17f5492.slice.
Jul 7 00:10:33.827656 kubelet[3067]: I0707 00:10:33.827529 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9683e6b0-ce1d-45ac-b33e-1bc1e17f5492-var-lib-calico\") pod \"tigera-operator-747864d56d-6pn9j\" (UID: \"9683e6b0-ce1d-45ac-b33e-1bc1e17f5492\") " pod="tigera-operator/tigera-operator-747864d56d-6pn9j"
Jul 7 00:10:33.828475 kubelet[3067]: I0707 00:10:33.827670 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg28q\" (UniqueName: \"kubernetes.io/projected/9683e6b0-ce1d-45ac-b33e-1bc1e17f5492-kube-api-access-vg28q\") pod \"tigera-operator-747864d56d-6pn9j\" (UID: \"9683e6b0-ce1d-45ac-b33e-1bc1e17f5492\") " pod="tigera-operator/tigera-operator-747864d56d-6pn9j"
Jul 7 00:10:34.035952 containerd[1804]: time="2025-07-07T00:10:34.035873223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-6pn9j,Uid:9683e6b0-ce1d-45ac-b33e-1bc1e17f5492,Namespace:tigera-operator,Attempt:0,}"
Jul 7 00:10:34.129516 containerd[1804]: time="2025-07-07T00:10:34.129467656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dd9sf,Uid:4384a66d-9811-4e14-9c7c-73b293cb2080,Namespace:kube-system,Attempt:0,}"
Jul 7 00:10:34.423075 containerd[1804]: time="2025-07-07T00:10:34.422972852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 00:10:34.423075 containerd[1804]: time="2025-07-07T00:10:34.423002776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 00:10:34.423075 containerd[1804]: time="2025-07-07T00:10:34.423009987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 00:10:34.423075 containerd[1804]: time="2025-07-07T00:10:34.423054053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 00:10:34.436313 systemd[1]: Started cri-containerd-59544dd389eb4a1004d89f1b0cac44229dc9328342fda39d9ecc5f6a3246abe2.scope - libcontainer container 59544dd389eb4a1004d89f1b0cac44229dc9328342fda39d9ecc5f6a3246abe2.
Jul 7 00:10:34.508077 containerd[1804]: time="2025-07-07T00:10:34.508049021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-6pn9j,Uid:9683e6b0-ce1d-45ac-b33e-1bc1e17f5492,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"59544dd389eb4a1004d89f1b0cac44229dc9328342fda39d9ecc5f6a3246abe2\""
Jul 7 00:10:34.509096 containerd[1804]: time="2025-07-07T00:10:34.509077480Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 7 00:10:34.532477 containerd[1804]: time="2025-07-07T00:10:34.532412393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 00:10:34.532477 containerd[1804]: time="2025-07-07T00:10:34.532440748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 00:10:34.532477 containerd[1804]: time="2025-07-07T00:10:34.532450605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 00:10:34.532587 containerd[1804]: time="2025-07-07T00:10:34.532495673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 00:10:34.553385 systemd[1]: Started cri-containerd-50176deb8e9eac6ede845afcc36530e6b418889b82ebd4183f2d66db78c7a039.scope - libcontainer container 50176deb8e9eac6ede845afcc36530e6b418889b82ebd4183f2d66db78c7a039.
Jul 7 00:10:34.566765 containerd[1804]: time="2025-07-07T00:10:34.566733308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dd9sf,Uid:4384a66d-9811-4e14-9c7c-73b293cb2080,Namespace:kube-system,Attempt:0,} returns sandbox id \"50176deb8e9eac6ede845afcc36530e6b418889b82ebd4183f2d66db78c7a039\""
Jul 7 00:10:34.568486 containerd[1804]: time="2025-07-07T00:10:34.568461254Z" level=info msg="CreateContainer within sandbox \"50176deb8e9eac6ede845afcc36530e6b418889b82ebd4183f2d66db78c7a039\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 7 00:10:34.575070 containerd[1804]: time="2025-07-07T00:10:34.575054758Z" level=info msg="CreateContainer within sandbox \"50176deb8e9eac6ede845afcc36530e6b418889b82ebd4183f2d66db78c7a039\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"887bf4675d37b6187caf19d58847d8453d8e24966f8b46c68822899cd5e272b7\""
Jul 7 00:10:34.575302 containerd[1804]: time="2025-07-07T00:10:34.575288749Z" level=info msg="StartContainer for \"887bf4675d37b6187caf19d58847d8453d8e24966f8b46c68822899cd5e272b7\""
Jul 7 00:10:34.607498 systemd[1]: Started cri-containerd-887bf4675d37b6187caf19d58847d8453d8e24966f8b46c68822899cd5e272b7.scope - libcontainer container 887bf4675d37b6187caf19d58847d8453d8e24966f8b46c68822899cd5e272b7.
Jul 7 00:10:34.662135 containerd[1804]: time="2025-07-07T00:10:34.662089929Z" level=info msg="StartContainer for \"887bf4675d37b6187caf19d58847d8453d8e24966f8b46c68822899cd5e272b7\" returns successfully"
Jul 7 00:10:35.326281 kubelet[3067]: I0707 00:10:35.326209 3067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dd9sf" podStartSLOduration=2.3261988049999998 podStartE2EDuration="2.326198805s" podCreationTimestamp="2025-07-07 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:10:35.325871117 +0000 UTC m=+7.123907360" watchObservedRunningTime="2025-07-07 00:10:35.326198805 +0000 UTC m=+7.124235044"
Jul 7 00:10:36.010597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2968341731.mount: Deactivated successfully.
Jul 7 00:10:36.313219 containerd[1804]: time="2025-07-07T00:10:36.313144038Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:10:36.313409 containerd[1804]: time="2025-07-07T00:10:36.313255014Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 7 00:10:36.313655 containerd[1804]: time="2025-07-07T00:10:36.313613143Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:10:36.314880 containerd[1804]: time="2025-07-07T00:10:36.314840210Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:10:36.315402 containerd[1804]: time="2025-07-07T00:10:36.315361559Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.806260895s"
Jul 7 00:10:36.315402 containerd[1804]: time="2025-07-07T00:10:36.315376810Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 7 00:10:36.316264 containerd[1804]: time="2025-07-07T00:10:36.316249512Z" level=info msg="CreateContainer within sandbox \"59544dd389eb4a1004d89f1b0cac44229dc9328342fda39d9ecc5f6a3246abe2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 7 00:10:36.320079 containerd[1804]: time="2025-07-07T00:10:36.320064987Z" level=info msg="CreateContainer within sandbox \"59544dd389eb4a1004d89f1b0cac44229dc9328342fda39d9ecc5f6a3246abe2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a05891ec742f24e8c5df0b1dc296a70886ee6baacd757ae75de599275dfe3a5d\""
Jul 7 00:10:36.320277 containerd[1804]: time="2025-07-07T00:10:36.320265293Z" level=info msg="StartContainer for \"a05891ec742f24e8c5df0b1dc296a70886ee6baacd757ae75de599275dfe3a5d\""
Jul 7 00:10:36.352421 systemd[1]: Started cri-containerd-a05891ec742f24e8c5df0b1dc296a70886ee6baacd757ae75de599275dfe3a5d.scope - libcontainer container a05891ec742f24e8c5df0b1dc296a70886ee6baacd757ae75de599275dfe3a5d.
Jul 7 00:10:36.365287 containerd[1804]: time="2025-07-07T00:10:36.365239798Z" level=info msg="StartContainer for \"a05891ec742f24e8c5df0b1dc296a70886ee6baacd757ae75de599275dfe3a5d\" returns successfully"
Jul 7 00:10:37.342775 kubelet[3067]: I0707 00:10:37.342709 3067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-6pn9j" podStartSLOduration=2.535796865 podStartE2EDuration="4.342698942s" podCreationTimestamp="2025-07-07 00:10:33 +0000 UTC" firstStartedPulling="2025-07-07 00:10:34.508825698 +0000 UTC m=+6.306861947" lastFinishedPulling="2025-07-07 00:10:36.315727781 +0000 UTC m=+8.113764024" observedRunningTime="2025-07-07 00:10:37.342611457 +0000 UTC m=+9.140647700" watchObservedRunningTime="2025-07-07 00:10:37.342698942 +0000 UTC m=+9.140735181"
Jul 7 00:10:40.545988 sudo[2082]: pam_unix(sudo:session): session closed for user root
Jul 7 00:10:40.546942 sshd[2078]: pam_unix(sshd:session): session closed for user core
Jul 7 00:10:40.549212 systemd[1]: sshd@8-147.28.180.255:22-139.178.89.65:56652.service: Deactivated successfully.
Jul 7 00:10:40.550358 systemd[1]: session-11.scope: Deactivated successfully.
Jul 7 00:10:40.550562 systemd[1]: session-11.scope: Consumed 3.382s CPU time, 168.5M memory peak, 0B memory swap peak.
Jul 7 00:10:40.551023 systemd-logind[1794]: Session 11 logged out. Waiting for processes to exit.
Jul 7 00:10:40.551635 systemd-logind[1794]: Removed session 11.
Jul 7 00:10:41.103211 update_engine[1799]: I20250707 00:10:41.103147 1799 update_attempter.cc:509] Updating boot flags...
Jul 7 00:10:41.132138 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3595)
Jul 7 00:10:41.163136 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3599)
Jul 7 00:10:42.600416 systemd[1]: Created slice kubepods-besteffort-pod33a33361_3bfb_44bd_9d44_200809ec4691.slice - libcontainer container kubepods-besteffort-pod33a33361_3bfb_44bd_9d44_200809ec4691.slice.
Jul 7 00:10:42.689052 kubelet[3067]: I0707 00:10:42.688938 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33a33361-3bfb-44bd-9d44-200809ec4691-tigera-ca-bundle\") pod \"calico-typha-794499458c-2kqvt\" (UID: \"33a33361-3bfb-44bd-9d44-200809ec4691\") " pod="calico-system/calico-typha-794499458c-2kqvt"
Jul 7 00:10:42.689052 kubelet[3067]: I0707 00:10:42.689030 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/33a33361-3bfb-44bd-9d44-200809ec4691-typha-certs\") pod \"calico-typha-794499458c-2kqvt\" (UID: \"33a33361-3bfb-44bd-9d44-200809ec4691\") " pod="calico-system/calico-typha-794499458c-2kqvt"
Jul 7 00:10:42.689959 kubelet[3067]: I0707 00:10:42.689098 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqt6x\" (UniqueName: \"kubernetes.io/projected/33a33361-3bfb-44bd-9d44-200809ec4691-kube-api-access-jqt6x\") pod \"calico-typha-794499458c-2kqvt\" (UID: \"33a33361-3bfb-44bd-9d44-200809ec4691\") " pod="calico-system/calico-typha-794499458c-2kqvt"
Jul 7 00:10:42.905631 containerd[1804]: time="2025-07-07T00:10:42.905405884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-794499458c-2kqvt,Uid:33a33361-3bfb-44bd-9d44-200809ec4691,Namespace:calico-system,Attempt:0,}"
Jul 7 00:10:42.915952 containerd[1804]: time="2025-07-07T00:10:42.915889680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 00:10:42.915952 containerd[1804]: time="2025-07-07T00:10:42.915913935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 00:10:42.915952 containerd[1804]: time="2025-07-07T00:10:42.915920729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 00:10:42.916080 containerd[1804]: time="2025-07-07T00:10:42.915959565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 00:10:42.936353 systemd[1]: Started cri-containerd-1f67499921ce51ac663311f3531be11e0be5656fea610cfb42769a2e976e6b57.scope - libcontainer container 1f67499921ce51ac663311f3531be11e0be5656fea610cfb42769a2e976e6b57.
Jul 7 00:10:42.939722 systemd[1]: Created slice kubepods-besteffort-pod4d8714d0_974e_4cb3_855a_40be2da1f9bc.slice - libcontainer container kubepods-besteffort-pod4d8714d0_974e_4cb3_855a_40be2da1f9bc.slice.
Jul 7 00:10:42.960192 containerd[1804]: time="2025-07-07T00:10:42.960129066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-794499458c-2kqvt,Uid:33a33361-3bfb-44bd-9d44-200809ec4691,Namespace:calico-system,Attempt:0,} returns sandbox id \"1f67499921ce51ac663311f3531be11e0be5656fea610cfb42769a2e976e6b57\"" Jul 7 00:10:42.960786 containerd[1804]: time="2025-07-07T00:10:42.960744969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 00:10:42.992311 kubelet[3067]: I0707 00:10:42.992260 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j92ct\" (UniqueName: \"kubernetes.io/projected/4d8714d0-974e-4cb3-855a-40be2da1f9bc-kube-api-access-j92ct\") pod \"calico-node-ckbbv\" (UID: \"4d8714d0-974e-4cb3-855a-40be2da1f9bc\") " pod="calico-system/calico-node-ckbbv" Jul 7 00:10:42.992311 kubelet[3067]: I0707 00:10:42.992287 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4d8714d0-974e-4cb3-855a-40be2da1f9bc-policysync\") pod \"calico-node-ckbbv\" (UID: \"4d8714d0-974e-4cb3-855a-40be2da1f9bc\") " pod="calico-system/calico-node-ckbbv" Jul 7 00:10:42.992311 kubelet[3067]: I0707 00:10:42.992299 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d8714d0-974e-4cb3-855a-40be2da1f9bc-lib-modules\") pod \"calico-node-ckbbv\" (UID: \"4d8714d0-974e-4cb3-855a-40be2da1f9bc\") " pod="calico-system/calico-node-ckbbv" Jul 7 00:10:42.992311 kubelet[3067]: I0707 00:10:42.992310 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d8714d0-974e-4cb3-855a-40be2da1f9bc-tigera-ca-bundle\") pod \"calico-node-ckbbv\" (UID: \"4d8714d0-974e-4cb3-855a-40be2da1f9bc\") " 
pod="calico-system/calico-node-ckbbv" Jul 7 00:10:42.992456 kubelet[3067]: I0707 00:10:42.992321 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4d8714d0-974e-4cb3-855a-40be2da1f9bc-cni-net-dir\") pod \"calico-node-ckbbv\" (UID: \"4d8714d0-974e-4cb3-855a-40be2da1f9bc\") " pod="calico-system/calico-node-ckbbv" Jul 7 00:10:42.992456 kubelet[3067]: I0707 00:10:42.992345 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4d8714d0-974e-4cb3-855a-40be2da1f9bc-cni-log-dir\") pod \"calico-node-ckbbv\" (UID: \"4d8714d0-974e-4cb3-855a-40be2da1f9bc\") " pod="calico-system/calico-node-ckbbv" Jul 7 00:10:42.992456 kubelet[3067]: I0707 00:10:42.992364 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4d8714d0-974e-4cb3-855a-40be2da1f9bc-node-certs\") pod \"calico-node-ckbbv\" (UID: \"4d8714d0-974e-4cb3-855a-40be2da1f9bc\") " pod="calico-system/calico-node-ckbbv" Jul 7 00:10:42.992456 kubelet[3067]: I0707 00:10:42.992375 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4d8714d0-974e-4cb3-855a-40be2da1f9bc-cni-bin-dir\") pod \"calico-node-ckbbv\" (UID: \"4d8714d0-974e-4cb3-855a-40be2da1f9bc\") " pod="calico-system/calico-node-ckbbv" Jul 7 00:10:42.992456 kubelet[3067]: I0707 00:10:42.992386 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4d8714d0-974e-4cb3-855a-40be2da1f9bc-flexvol-driver-host\") pod \"calico-node-ckbbv\" (UID: \"4d8714d0-974e-4cb3-855a-40be2da1f9bc\") " pod="calico-system/calico-node-ckbbv" Jul 7 00:10:42.992557 kubelet[3067]: I0707 
00:10:42.992396 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d8714d0-974e-4cb3-855a-40be2da1f9bc-xtables-lock\") pod \"calico-node-ckbbv\" (UID: \"4d8714d0-974e-4cb3-855a-40be2da1f9bc\") " pod="calico-system/calico-node-ckbbv" Jul 7 00:10:42.992557 kubelet[3067]: I0707 00:10:42.992426 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4d8714d0-974e-4cb3-855a-40be2da1f9bc-var-run-calico\") pod \"calico-node-ckbbv\" (UID: \"4d8714d0-974e-4cb3-855a-40be2da1f9bc\") " pod="calico-system/calico-node-ckbbv" Jul 7 00:10:42.992557 kubelet[3067]: I0707 00:10:42.992440 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4d8714d0-974e-4cb3-855a-40be2da1f9bc-var-lib-calico\") pod \"calico-node-ckbbv\" (UID: \"4d8714d0-974e-4cb3-855a-40be2da1f9bc\") " pod="calico-system/calico-node-ckbbv" Jul 7 00:10:43.095508 kubelet[3067]: E0707 00:10:43.095425 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.095508 kubelet[3067]: W0707 00:10:43.095468 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.095810 kubelet[3067]: E0707 00:10:43.095535 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.100397 kubelet[3067]: E0707 00:10:43.100325 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.100397 kubelet[3067]: W0707 00:10:43.100361 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.100397 kubelet[3067]: E0707 00:10:43.100396 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.110692 kubelet[3067]: E0707 00:10:43.110604 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.110692 kubelet[3067]: W0707 00:10:43.110642 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.110692 kubelet[3067]: E0707 00:10:43.110677 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.207094 kubelet[3067]: E0707 00:10:43.206817 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvdv8" podUID="9c39c50a-eb2f-499c-b38e-71339392cd68" Jul 7 00:10:43.242914 containerd[1804]: time="2025-07-07T00:10:43.242788978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ckbbv,Uid:4d8714d0-974e-4cb3-855a-40be2da1f9bc,Namespace:calico-system,Attempt:0,}" Jul 7 00:10:43.287166 kubelet[3067]: E0707 00:10:43.287086 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.287166 kubelet[3067]: W0707 00:10:43.287165 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.287616 kubelet[3067]: E0707 00:10:43.287215 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.287789 kubelet[3067]: E0707 00:10:43.287729 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.287789 kubelet[3067]: W0707 00:10:43.287754 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.287789 kubelet[3067]: E0707 00:10:43.287781 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.288376 kubelet[3067]: E0707 00:10:43.288347 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.288376 kubelet[3067]: W0707 00:10:43.288375 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.288604 kubelet[3067]: E0707 00:10:43.288403 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.288988 kubelet[3067]: E0707 00:10:43.288956 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.288988 kubelet[3067]: W0707 00:10:43.288983 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.289211 kubelet[3067]: E0707 00:10:43.289010 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.289603 kubelet[3067]: E0707 00:10:43.289561 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.289764 kubelet[3067]: W0707 00:10:43.289600 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.289764 kubelet[3067]: E0707 00:10:43.289643 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.290226 kubelet[3067]: E0707 00:10:43.290193 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.290361 kubelet[3067]: W0707 00:10:43.290238 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.290361 kubelet[3067]: E0707 00:10:43.290283 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.290831 kubelet[3067]: E0707 00:10:43.290789 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.290831 kubelet[3067]: W0707 00:10:43.290817 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.291061 kubelet[3067]: E0707 00:10:43.290844 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.291357 kubelet[3067]: E0707 00:10:43.291315 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.291533 kubelet[3067]: W0707 00:10:43.291353 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.291533 kubelet[3067]: E0707 00:10:43.291398 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.291993 kubelet[3067]: E0707 00:10:43.291918 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.291993 kubelet[3067]: W0707 00:10:43.291962 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.292493 kubelet[3067]: E0707 00:10:43.292006 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.292728 kubelet[3067]: E0707 00:10:43.292675 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.292884 kubelet[3067]: W0707 00:10:43.292724 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.292884 kubelet[3067]: E0707 00:10:43.292780 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.293510 kubelet[3067]: E0707 00:10:43.293453 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.293510 kubelet[3067]: W0707 00:10:43.293504 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.294029 kubelet[3067]: E0707 00:10:43.293544 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.294195 kubelet[3067]: E0707 00:10:43.294153 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.294195 kubelet[3067]: W0707 00:10:43.294184 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.294575 kubelet[3067]: E0707 00:10:43.294222 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.294897 kubelet[3067]: E0707 00:10:43.294856 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.295069 kubelet[3067]: W0707 00:10:43.294897 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.295069 kubelet[3067]: E0707 00:10:43.294943 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.295417 containerd[1804]: time="2025-07-07T00:10:43.293772316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:10:43.295417 containerd[1804]: time="2025-07-07T00:10:43.295296566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:10:43.295749 containerd[1804]: time="2025-07-07T00:10:43.295374917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:10:43.295925 containerd[1804]: time="2025-07-07T00:10:43.295687821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:10:43.296121 kubelet[3067]: E0707 00:10:43.295736 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.296121 kubelet[3067]: W0707 00:10:43.295775 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.296121 kubelet[3067]: E0707 00:10:43.295818 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.296645 kubelet[3067]: E0707 00:10:43.296427 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.296645 kubelet[3067]: W0707 00:10:43.296466 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.296645 kubelet[3067]: E0707 00:10:43.296509 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.297215 kubelet[3067]: E0707 00:10:43.297114 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.297215 kubelet[3067]: W0707 00:10:43.297186 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.297515 kubelet[3067]: E0707 00:10:43.297232 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.297889 kubelet[3067]: E0707 00:10:43.297848 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.297889 kubelet[3067]: W0707 00:10:43.297884 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.298120 kubelet[3067]: E0707 00:10:43.297925 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.298527 kubelet[3067]: E0707 00:10:43.298478 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.298527 kubelet[3067]: W0707 00:10:43.298519 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.298753 kubelet[3067]: E0707 00:10:43.298561 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.299172 kubelet[3067]: E0707 00:10:43.299108 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.299300 kubelet[3067]: W0707 00:10:43.299175 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.299300 kubelet[3067]: E0707 00:10:43.299219 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.299813 kubelet[3067]: E0707 00:10:43.299760 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.299813 kubelet[3067]: W0707 00:10:43.299798 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.299998 kubelet[3067]: E0707 00:10:43.299841 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.300676 kubelet[3067]: E0707 00:10:43.300616 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.300676 kubelet[3067]: W0707 00:10:43.300652 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.300996 kubelet[3067]: E0707 00:10:43.300685 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.300996 kubelet[3067]: I0707 00:10:43.300742 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c39c50a-eb2f-499c-b38e-71339392cd68-kubelet-dir\") pod \"csi-node-driver-mvdv8\" (UID: \"9c39c50a-eb2f-499c-b38e-71339392cd68\") " pod="calico-system/csi-node-driver-mvdv8" Jul 7 00:10:43.301261 kubelet[3067]: E0707 00:10:43.301200 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.301261 kubelet[3067]: W0707 00:10:43.301227 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.301261 kubelet[3067]: E0707 00:10:43.301260 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.301518 kubelet[3067]: I0707 00:10:43.301308 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9c39c50a-eb2f-499c-b38e-71339392cd68-registration-dir\") pod \"csi-node-driver-mvdv8\" (UID: \"9c39c50a-eb2f-499c-b38e-71339392cd68\") " pod="calico-system/csi-node-driver-mvdv8" Jul 7 00:10:43.301859 kubelet[3067]: E0707 00:10:43.301812 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.301859 kubelet[3067]: W0707 00:10:43.301855 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.302205 kubelet[3067]: E0707 00:10:43.301915 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.302648 kubelet[3067]: E0707 00:10:43.302489 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.302648 kubelet[3067]: W0707 00:10:43.302550 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.302648 kubelet[3067]: E0707 00:10:43.302612 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.303319 kubelet[3067]: E0707 00:10:43.303263 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.303525 kubelet[3067]: W0707 00:10:43.303322 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.303525 kubelet[3067]: E0707 00:10:43.303377 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.303525 kubelet[3067]: I0707 00:10:43.303449 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svp5f\" (UniqueName: \"kubernetes.io/projected/9c39c50a-eb2f-499c-b38e-71339392cd68-kube-api-access-svp5f\") pod \"csi-node-driver-mvdv8\" (UID: \"9c39c50a-eb2f-499c-b38e-71339392cd68\") " pod="calico-system/csi-node-driver-mvdv8" Jul 7 00:10:43.304345 kubelet[3067]: E0707 00:10:43.304281 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.304345 kubelet[3067]: W0707 00:10:43.304330 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.304588 kubelet[3067]: E0707 00:10:43.304421 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.304588 kubelet[3067]: I0707 00:10:43.304537 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9c39c50a-eb2f-499c-b38e-71339392cd68-varrun\") pod \"csi-node-driver-mvdv8\" (UID: \"9c39c50a-eb2f-499c-b38e-71339392cd68\") " pod="calico-system/csi-node-driver-mvdv8" Jul 7 00:10:43.304997 kubelet[3067]: E0707 00:10:43.304944 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.304997 kubelet[3067]: W0707 00:10:43.304981 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.305198 kubelet[3067]: E0707 00:10:43.305062 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.305592 kubelet[3067]: E0707 00:10:43.305559 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.305592 kubelet[3067]: W0707 00:10:43.305589 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.305890 kubelet[3067]: E0707 00:10:43.305657 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.306102 kubelet[3067]: E0707 00:10:43.306076 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.306264 kubelet[3067]: W0707 00:10:43.306102 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.306264 kubelet[3067]: E0707 00:10:43.306198 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.306607 kubelet[3067]: E0707 00:10:43.306573 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.306607 kubelet[3067]: W0707 00:10:43.306598 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.306912 kubelet[3067]: E0707 00:10:43.306653 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.306912 kubelet[3067]: I0707 00:10:43.306710 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9c39c50a-eb2f-499c-b38e-71339392cd68-socket-dir\") pod \"csi-node-driver-mvdv8\" (UID: \"9c39c50a-eb2f-499c-b38e-71339392cd68\") " pod="calico-system/csi-node-driver-mvdv8" Jul 7 00:10:43.307108 kubelet[3067]: E0707 00:10:43.306974 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.307108 kubelet[3067]: W0707 00:10:43.306990 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.307108 kubelet[3067]: E0707 00:10:43.307008 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.307572 kubelet[3067]: E0707 00:10:43.307539 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.307648 kubelet[3067]: W0707 00:10:43.307576 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.307648 kubelet[3067]: E0707 00:10:43.307619 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.308076 kubelet[3067]: E0707 00:10:43.308051 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.308177 kubelet[3067]: W0707 00:10:43.308082 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.308177 kubelet[3067]: E0707 00:10:43.308114 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.308604 kubelet[3067]: E0707 00:10:43.308582 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.308671 kubelet[3067]: W0707 00:10:43.308605 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.308671 kubelet[3067]: E0707 00:10:43.308627 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.308938 kubelet[3067]: E0707 00:10:43.308919 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.308938 kubelet[3067]: W0707 00:10:43.308936 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.309071 kubelet[3067]: E0707 00:10:43.308953 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.329321 systemd[1]: Started cri-containerd-bb931834930d1915f9ed09b747012c36a8ed2448405ba99747c95f4f400d4a85.scope - libcontainer container bb931834930d1915f9ed09b747012c36a8ed2448405ba99747c95f4f400d4a85. Jul 7 00:10:43.345953 containerd[1804]: time="2025-07-07T00:10:43.345898126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ckbbv,Uid:4d8714d0-974e-4cb3-855a-40be2da1f9bc,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb931834930d1915f9ed09b747012c36a8ed2448405ba99747c95f4f400d4a85\"" Jul 7 00:10:43.408935 kubelet[3067]: E0707 00:10:43.408846 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.408935 kubelet[3067]: W0707 00:10:43.408887 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.408935 kubelet[3067]: E0707 00:10:43.408925 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.409563 kubelet[3067]: E0707 00:10:43.409525 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.409563 kubelet[3067]: W0707 00:10:43.409561 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.409820 kubelet[3067]: E0707 00:10:43.409604 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.410260 kubelet[3067]: E0707 00:10:43.410217 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.410260 kubelet[3067]: W0707 00:10:43.410257 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.410509 kubelet[3067]: E0707 00:10:43.410305 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.410803 kubelet[3067]: E0707 00:10:43.410770 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.410803 kubelet[3067]: W0707 00:10:43.410799 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.411003 kubelet[3067]: E0707 00:10:43.410833 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.411406 kubelet[3067]: E0707 00:10:43.411352 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.411406 kubelet[3067]: W0707 00:10:43.411379 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.411633 kubelet[3067]: E0707 00:10:43.411489 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.411860 kubelet[3067]: E0707 00:10:43.411802 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.411860 kubelet[3067]: W0707 00:10:43.411828 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.412046 kubelet[3067]: E0707 00:10:43.411929 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.412359 kubelet[3067]: E0707 00:10:43.412294 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.412359 kubelet[3067]: W0707 00:10:43.412319 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.412650 kubelet[3067]: E0707 00:10:43.412374 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.412936 kubelet[3067]: E0707 00:10:43.412887 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.412936 kubelet[3067]: W0707 00:10:43.412914 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.413180 kubelet[3067]: E0707 00:10:43.412946 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.413587 kubelet[3067]: E0707 00:10:43.413547 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.413587 kubelet[3067]: W0707 00:10:43.413584 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.413949 kubelet[3067]: E0707 00:10:43.413626 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.414262 kubelet[3067]: E0707 00:10:43.414218 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.414262 kubelet[3067]: W0707 00:10:43.414257 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.414598 kubelet[3067]: E0707 00:10:43.414308 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.414885 kubelet[3067]: E0707 00:10:43.414847 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.414885 kubelet[3067]: W0707 00:10:43.414876 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.415226 kubelet[3067]: E0707 00:10:43.414983 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.415411 kubelet[3067]: E0707 00:10:43.415374 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.415411 kubelet[3067]: W0707 00:10:43.415408 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.415692 kubelet[3067]: E0707 00:10:43.415472 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.415894 kubelet[3067]: E0707 00:10:43.415863 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.415894 kubelet[3067]: W0707 00:10:43.415893 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.416246 kubelet[3067]: E0707 00:10:43.415976 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.416468 kubelet[3067]: E0707 00:10:43.416434 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.416468 kubelet[3067]: W0707 00:10:43.416464 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.416760 kubelet[3067]: E0707 00:10:43.416578 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.417004 kubelet[3067]: E0707 00:10:43.416971 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.417004 kubelet[3067]: W0707 00:10:43.417001 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.417328 kubelet[3067]: E0707 00:10:43.417073 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.417551 kubelet[3067]: E0707 00:10:43.417512 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.417551 kubelet[3067]: W0707 00:10:43.417545 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.417835 kubelet[3067]: E0707 00:10:43.417631 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.418108 kubelet[3067]: E0707 00:10:43.418075 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.418108 kubelet[3067]: W0707 00:10:43.418106 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.418437 kubelet[3067]: E0707 00:10:43.418205 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.418673 kubelet[3067]: E0707 00:10:43.418639 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.418673 kubelet[3067]: W0707 00:10:43.418669 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.418977 kubelet[3067]: E0707 00:10:43.418772 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.419268 kubelet[3067]: E0707 00:10:43.419229 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.419268 kubelet[3067]: W0707 00:10:43.419267 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.419617 kubelet[3067]: E0707 00:10:43.419359 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.419942 kubelet[3067]: E0707 00:10:43.419904 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.419942 kubelet[3067]: W0707 00:10:43.419936 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.420248 kubelet[3067]: E0707 00:10:43.420016 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.420501 kubelet[3067]: E0707 00:10:43.420460 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.420501 kubelet[3067]: W0707 00:10:43.420491 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.420802 kubelet[3067]: E0707 00:10:43.420581 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.421027 kubelet[3067]: E0707 00:10:43.420990 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.421027 kubelet[3067]: W0707 00:10:43.421019 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.421358 kubelet[3067]: E0707 00:10:43.421065 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.421707 kubelet[3067]: E0707 00:10:43.421653 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.421707 kubelet[3067]: W0707 00:10:43.421684 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.421975 kubelet[3067]: E0707 00:10:43.421721 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.422352 kubelet[3067]: E0707 00:10:43.422315 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.422352 kubelet[3067]: W0707 00:10:43.422343 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.422660 kubelet[3067]: E0707 00:10:43.422377 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:43.423056 kubelet[3067]: E0707 00:10:43.423024 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.423225 kubelet[3067]: W0707 00:10:43.423057 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.423225 kubelet[3067]: E0707 00:10:43.423090 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:43.438552 kubelet[3067]: E0707 00:10:43.438503 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:43.438552 kubelet[3067]: W0707 00:10:43.438543 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:43.438855 kubelet[3067]: E0707 00:10:43.438588 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:44.575617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1820149465.mount: Deactivated successfully. Jul 7 00:10:45.293229 containerd[1804]: time="2025-07-07T00:10:45.293178180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:45.293484 containerd[1804]: time="2025-07-07T00:10:45.293439437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 00:10:45.293878 containerd[1804]: time="2025-07-07T00:10:45.293841306Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:45.294767 containerd[1804]: time="2025-07-07T00:10:45.294732490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:45.295174 containerd[1804]: time="2025-07-07T00:10:45.295155742Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.334393595s" Jul 7 00:10:45.295205 containerd[1804]: time="2025-07-07T00:10:45.295171882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 00:10:45.295682 containerd[1804]: time="2025-07-07T00:10:45.295644001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 00:10:45.298539 containerd[1804]: time="2025-07-07T00:10:45.298522243Z" level=info msg="CreateContainer within sandbox \"1f67499921ce51ac663311f3531be11e0be5656fea610cfb42769a2e976e6b57\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 00:10:45.299675 kubelet[3067]: E0707 00:10:45.299658 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvdv8" podUID="9c39c50a-eb2f-499c-b38e-71339392cd68" Jul 7 00:10:45.302756 containerd[1804]: time="2025-07-07T00:10:45.302709520Z" level=info msg="CreateContainer within sandbox \"1f67499921ce51ac663311f3531be11e0be5656fea610cfb42769a2e976e6b57\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4702a05dae978859a5cb2660077cde77bc054caed776f4891f342445a6a513d8\"" Jul 7 00:10:45.303007 containerd[1804]: time="2025-07-07T00:10:45.302995885Z" level=info msg="StartContainer for \"4702a05dae978859a5cb2660077cde77bc054caed776f4891f342445a6a513d8\"" Jul 7 00:10:45.337667 systemd[1]: Started cri-containerd-4702a05dae978859a5cb2660077cde77bc054caed776f4891f342445a6a513d8.scope - libcontainer container 
4702a05dae978859a5cb2660077cde77bc054caed776f4891f342445a6a513d8. Jul 7 00:10:45.421025 containerd[1804]: time="2025-07-07T00:10:45.421004140Z" level=info msg="StartContainer for \"4702a05dae978859a5cb2660077cde77bc054caed776f4891f342445a6a513d8\" returns successfully" Jul 7 00:10:46.422863 kubelet[3067]: E0707 00:10:46.422812 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:46.422863 kubelet[3067]: W0707 00:10:46.422828 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:46.422863 kubelet[3067]: E0707 00:10:46.422846 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:46.423167 kubelet[3067]: E0707 00:10:46.423023 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:46.423167 kubelet[3067]: W0707 00:10:46.423029 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:46.423167 kubelet[3067]: E0707 00:10:46.423036 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:46.423167 kubelet[3067]: E0707 00:10:46.423153 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:46.423167 kubelet[3067]: W0707 00:10:46.423158 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:46.423167 kubelet[3067]: E0707 00:10:46.423164 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:46.423391 kubelet[3067]: E0707 00:10:46.423351 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:46.423391 kubelet[3067]: W0707 00:10:46.423360 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:46.423391 kubelet[3067]: E0707 00:10:46.423368 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:10:46.443746 kubelet[3067]: E0707 00:10:46.443737 3067 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:10:46.443746 kubelet[3067]: W0707 00:10:46.443746 3067 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:10:46.443813 kubelet[3067]: E0707 00:10:46.443755 3067 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:10:47.299729 kubelet[3067]: E0707 00:10:47.299704 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvdv8" podUID="9c39c50a-eb2f-499c-b38e-71339392cd68" Jul 7 00:10:47.317648 containerd[1804]: time="2025-07-07T00:10:47.317623773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:47.317959 containerd[1804]: time="2025-07-07T00:10:47.317940647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 00:10:47.318327 containerd[1804]: time="2025-07-07T00:10:47.318290402Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:47.319215 containerd[1804]: time="2025-07-07T00:10:47.319177413Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:47.319614 containerd[1804]: time="2025-07-07T00:10:47.319598673Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.023937406s" Jul 7 00:10:47.319646 containerd[1804]: time="2025-07-07T00:10:47.319620562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 00:10:47.320573 containerd[1804]: time="2025-07-07T00:10:47.320560872Z" level=info msg="CreateContainer within sandbox \"bb931834930d1915f9ed09b747012c36a8ed2448405ba99747c95f4f400d4a85\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 00:10:47.325259 containerd[1804]: time="2025-07-07T00:10:47.325242267Z" level=info msg="CreateContainer within sandbox \"bb931834930d1915f9ed09b747012c36a8ed2448405ba99747c95f4f400d4a85\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"49bf23337481d6607ed7ad61958e0aa560f291468e440dd580e013ac38798554\"" Jul 7 00:10:47.325515 containerd[1804]: time="2025-07-07T00:10:47.325500538Z" level=info msg="StartContainer for \"49bf23337481d6607ed7ad61958e0aa560f291468e440dd580e013ac38798554\"" Jul 7 00:10:47.350454 systemd[1]: Started cri-containerd-49bf23337481d6607ed7ad61958e0aa560f291468e440dd580e013ac38798554.scope - libcontainer container 49bf23337481d6607ed7ad61958e0aa560f291468e440dd580e013ac38798554. 
Jul 7 00:10:47.357038 kubelet[3067]: I0707 00:10:47.357024 3067 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:10:47.362836 containerd[1804]: time="2025-07-07T00:10:47.362813872Z" level=info msg="StartContainer for \"49bf23337481d6607ed7ad61958e0aa560f291468e440dd580e013ac38798554\" returns successfully" Jul 7 00:10:47.367190 systemd[1]: cri-containerd-49bf23337481d6607ed7ad61958e0aa560f291468e440dd580e013ac38798554.scope: Deactivated successfully. Jul 7 00:10:47.378389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49bf23337481d6607ed7ad61958e0aa560f291468e440dd580e013ac38798554-rootfs.mount: Deactivated successfully. Jul 7 00:10:47.803828 containerd[1804]: time="2025-07-07T00:10:47.803777613Z" level=info msg="shim disconnected" id=49bf23337481d6607ed7ad61958e0aa560f291468e440dd580e013ac38798554 namespace=k8s.io Jul 7 00:10:47.803828 containerd[1804]: time="2025-07-07T00:10:47.803817263Z" level=warning msg="cleaning up after shim disconnected" id=49bf23337481d6607ed7ad61958e0aa560f291468e440dd580e013ac38798554 namespace=k8s.io Jul 7 00:10:47.803828 containerd[1804]: time="2025-07-07T00:10:47.803823462Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:10:48.363679 containerd[1804]: time="2025-07-07T00:10:48.363601002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 00:10:48.371793 kubelet[3067]: I0707 00:10:48.371755 3067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-794499458c-2kqvt" podStartSLOduration=4.036812881 podStartE2EDuration="6.371744566s" podCreationTimestamp="2025-07-07 00:10:42 +0000 UTC" firstStartedPulling="2025-07-07 00:10:42.96063301 +0000 UTC m=+14.758669252" lastFinishedPulling="2025-07-07 00:10:45.295564695 +0000 UTC m=+17.093600937" observedRunningTime="2025-07-07 00:10:46.377828808 +0000 UTC m=+18.175865172" watchObservedRunningTime="2025-07-07 00:10:48.371744566 +0000 UTC m=+20.169780807" Jul 7 
00:10:49.299846 kubelet[3067]: E0707 00:10:49.299750 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvdv8" podUID="9c39c50a-eb2f-499c-b38e-71339392cd68" Jul 7 00:10:51.299546 kubelet[3067]: E0707 00:10:51.299521 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvdv8" podUID="9c39c50a-eb2f-499c-b38e-71339392cd68" Jul 7 00:10:51.499604 containerd[1804]: time="2025-07-07T00:10:51.499554111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:51.499820 containerd[1804]: time="2025-07-07T00:10:51.499719971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 00:10:51.500147 containerd[1804]: time="2025-07-07T00:10:51.500108881Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:51.501083 containerd[1804]: time="2025-07-07T00:10:51.501068025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:51.501470 containerd[1804]: time="2025-07-07T00:10:51.501454271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.137790115s" Jul 7 00:10:51.501524 containerd[1804]: time="2025-07-07T00:10:51.501471712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 00:10:51.502479 containerd[1804]: time="2025-07-07T00:10:51.502465627Z" level=info msg="CreateContainer within sandbox \"bb931834930d1915f9ed09b747012c36a8ed2448405ba99747c95f4f400d4a85\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 00:10:51.507864 containerd[1804]: time="2025-07-07T00:10:51.507821796Z" level=info msg="CreateContainer within sandbox \"bb931834930d1915f9ed09b747012c36a8ed2448405ba99747c95f4f400d4a85\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"22b22690ea69caf294c2aa833d9167f2de29c5b07047b0047c7975ac9ac1d8e9\"" Jul 7 00:10:51.508048 containerd[1804]: time="2025-07-07T00:10:51.508035764Z" level=info msg="StartContainer for \"22b22690ea69caf294c2aa833d9167f2de29c5b07047b0047c7975ac9ac1d8e9\"" Jul 7 00:10:51.534470 systemd[1]: Started cri-containerd-22b22690ea69caf294c2aa833d9167f2de29c5b07047b0047c7975ac9ac1d8e9.scope - libcontainer container 22b22690ea69caf294c2aa833d9167f2de29c5b07047b0047c7975ac9ac1d8e9. Jul 7 00:10:51.547306 containerd[1804]: time="2025-07-07T00:10:51.547278631Z" level=info msg="StartContainer for \"22b22690ea69caf294c2aa833d9167f2de29c5b07047b0047c7975ac9ac1d8e9\" returns successfully" Jul 7 00:10:52.145535 systemd[1]: cri-containerd-22b22690ea69caf294c2aa833d9167f2de29c5b07047b0047c7975ac9ac1d8e9.scope: Deactivated successfully. 
Jul 7 00:10:52.148273 kubelet[3067]: I0707 00:10:52.148260 3067 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 00:10:52.156274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22b22690ea69caf294c2aa833d9167f2de29c5b07047b0047c7975ac9ac1d8e9-rootfs.mount: Deactivated successfully. Jul 7 00:10:52.163024 systemd[1]: Created slice kubepods-burstable-podf702355d_417f_4caf_86a7_f40f67775a26.slice - libcontainer container kubepods-burstable-podf702355d_417f_4caf_86a7_f40f67775a26.slice. Jul 7 00:10:52.165789 systemd[1]: Created slice kubepods-besteffort-pod02f7ed80_c14c_4630_a553_ace08c648a2b.slice - libcontainer container kubepods-besteffort-pod02f7ed80_c14c_4630_a553_ace08c648a2b.slice. Jul 7 00:10:52.168437 systemd[1]: Created slice kubepods-burstable-pod8ff5f712_5346_4f8c_8f1a_94e4806cd738.slice - libcontainer container kubepods-burstable-pod8ff5f712_5346_4f8c_8f1a_94e4806cd738.slice. Jul 7 00:10:52.170849 systemd[1]: Created slice kubepods-besteffort-pod33756664_bad3_4e93_964e_584b092ec7ee.slice - libcontainer container kubepods-besteffort-pod33756664_bad3_4e93_964e_584b092ec7ee.slice. Jul 7 00:10:52.173457 systemd[1]: Created slice kubepods-besteffort-poda5989df0_3d41_4e07_823a_56249763eb4e.slice - libcontainer container kubepods-besteffort-poda5989df0_3d41_4e07_823a_56249763eb4e.slice. Jul 7 00:10:52.176135 systemd[1]: Created slice kubepods-besteffort-podc74f75d9_0067_422a_a233_ade5735b2645.slice - libcontainer container kubepods-besteffort-podc74f75d9_0067_422a_a233_ade5735b2645.slice. Jul 7 00:10:52.178598 systemd[1]: Created slice kubepods-besteffort-pod2c89e35d_ce3a_44df_8fe5_08a58c4b851d.slice - libcontainer container kubepods-besteffort-pod2c89e35d_ce3a_44df_8fe5_08a58c4b851d.slice. 
Jul 7 00:10:52.186213 kubelet[3067]: I0707 00:10:52.185369 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33756664-bad3-4e93-964e-584b092ec7ee-whisker-ca-bundle\") pod \"whisker-56d4f4c756-5qjv5\" (UID: \"33756664-bad3-4e93-964e-584b092ec7ee\") " pod="calico-system/whisker-56d4f4c756-5qjv5"
Jul 7 00:10:52.186213 kubelet[3067]: I0707 00:10:52.185417 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xvsl\" (UniqueName: \"kubernetes.io/projected/8ff5f712-5346-4f8c-8f1a-94e4806cd738-kube-api-access-9xvsl\") pod \"coredns-668d6bf9bc-t4xs8\" (UID: \"8ff5f712-5346-4f8c-8f1a-94e4806cd738\") " pod="kube-system/coredns-668d6bf9bc-t4xs8"
Jul 7 00:10:52.186213 kubelet[3067]: I0707 00:10:52.185439 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2c89e35d-ce3a-44df-8fe5-08a58c4b851d-calico-apiserver-certs\") pod \"calico-apiserver-898df8c5d-mjxn2\" (UID: \"2c89e35d-ce3a-44df-8fe5-08a58c4b851d\") " pod="calico-apiserver/calico-apiserver-898df8c5d-mjxn2"
Jul 7 00:10:52.186213 kubelet[3067]: I0707 00:10:52.185460 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpbmp\" (UniqueName: \"kubernetes.io/projected/f702355d-417f-4caf-86a7-f40f67775a26-kube-api-access-kpbmp\") pod \"coredns-668d6bf9bc-2vkrn\" (UID: \"f702355d-417f-4caf-86a7-f40f67775a26\") " pod="kube-system/coredns-668d6bf9bc-2vkrn"
Jul 7 00:10:52.186213 kubelet[3067]: I0707 00:10:52.185477 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ff5f712-5346-4f8c-8f1a-94e4806cd738-config-volume\") pod \"coredns-668d6bf9bc-t4xs8\" (UID: \"8ff5f712-5346-4f8c-8f1a-94e4806cd738\") " pod="kube-system/coredns-668d6bf9bc-t4xs8"
Jul 7 00:10:52.186449 kubelet[3067]: I0707 00:10:52.185495 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcv6r\" (UniqueName: \"kubernetes.io/projected/2c89e35d-ce3a-44df-8fe5-08a58c4b851d-kube-api-access-xcv6r\") pod \"calico-apiserver-898df8c5d-mjxn2\" (UID: \"2c89e35d-ce3a-44df-8fe5-08a58c4b851d\") " pod="calico-apiserver/calico-apiserver-898df8c5d-mjxn2"
Jul 7 00:10:52.186449 kubelet[3067]: I0707 00:10:52.185514 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f702355d-417f-4caf-86a7-f40f67775a26-config-volume\") pod \"coredns-668d6bf9bc-2vkrn\" (UID: \"f702355d-417f-4caf-86a7-f40f67775a26\") " pod="kube-system/coredns-668d6bf9bc-2vkrn"
Jul 7 00:10:52.186449 kubelet[3067]: I0707 00:10:52.185533 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c74f75d9-0067-422a-a233-ade5735b2645-goldmane-key-pair\") pod \"goldmane-768f4c5c69-xjqj5\" (UID: \"c74f75d9-0067-422a-a233-ade5735b2645\") " pod="calico-system/goldmane-768f4c5c69-xjqj5"
Jul 7 00:10:52.186449 kubelet[3067]: I0707 00:10:52.185549 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/33756664-bad3-4e93-964e-584b092ec7ee-whisker-backend-key-pair\") pod \"whisker-56d4f4c756-5qjv5\" (UID: \"33756664-bad3-4e93-964e-584b092ec7ee\") " pod="calico-system/whisker-56d4f4c756-5qjv5"
Jul 7 00:10:52.186449 kubelet[3067]: I0707 00:10:52.185577 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfn5r\" (UniqueName: \"kubernetes.io/projected/33756664-bad3-4e93-964e-584b092ec7ee-kube-api-access-xfn5r\") pod \"whisker-56d4f4c756-5qjv5\" (UID: \"33756664-bad3-4e93-964e-584b092ec7ee\") " pod="calico-system/whisker-56d4f4c756-5qjv5"
Jul 7 00:10:52.186558 kubelet[3067]: I0707 00:10:52.185593 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xsgx\" (UniqueName: \"kubernetes.io/projected/c74f75d9-0067-422a-a233-ade5735b2645-kube-api-access-5xsgx\") pod \"goldmane-768f4c5c69-xjqj5\" (UID: \"c74f75d9-0067-422a-a233-ade5735b2645\") " pod="calico-system/goldmane-768f4c5c69-xjqj5"
Jul 7 00:10:52.186558 kubelet[3067]: I0707 00:10:52.185609 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02f7ed80-c14c-4630-a553-ace08c648a2b-tigera-ca-bundle\") pod \"calico-kube-controllers-57c6d9946c-d62bp\" (UID: \"02f7ed80-c14c-4630-a553-ace08c648a2b\") " pod="calico-system/calico-kube-controllers-57c6d9946c-d62bp"
Jul 7 00:10:52.186558 kubelet[3067]: I0707 00:10:52.185624 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c74f75d9-0067-422a-a233-ade5735b2645-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-xjqj5\" (UID: \"c74f75d9-0067-422a-a233-ade5735b2645\") " pod="calico-system/goldmane-768f4c5c69-xjqj5"
Jul 7 00:10:52.186558 kubelet[3067]: I0707 00:10:52.185641 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ssjw\" (UniqueName: \"kubernetes.io/projected/02f7ed80-c14c-4630-a553-ace08c648a2b-kube-api-access-7ssjw\") pod \"calico-kube-controllers-57c6d9946c-d62bp\" (UID: \"02f7ed80-c14c-4630-a553-ace08c648a2b\") " pod="calico-system/calico-kube-controllers-57c6d9946c-d62bp"
Jul 7 00:10:52.186558 kubelet[3067]: I0707 00:10:52.185660 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a5989df0-3d41-4e07-823a-56249763eb4e-calico-apiserver-certs\") pod \"calico-apiserver-898df8c5d-x44jj\" (UID: \"a5989df0-3d41-4e07-823a-56249763eb4e\") " pod="calico-apiserver/calico-apiserver-898df8c5d-x44jj"
Jul 7 00:10:52.186721 kubelet[3067]: I0707 00:10:52.185676 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c22v2\" (UniqueName: \"kubernetes.io/projected/a5989df0-3d41-4e07-823a-56249763eb4e-kube-api-access-c22v2\") pod \"calico-apiserver-898df8c5d-x44jj\" (UID: \"a5989df0-3d41-4e07-823a-56249763eb4e\") " pod="calico-apiserver/calico-apiserver-898df8c5d-x44jj"
Jul 7 00:10:52.186721 kubelet[3067]: I0707 00:10:52.185691 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c74f75d9-0067-422a-a233-ade5735b2645-config\") pod \"goldmane-768f4c5c69-xjqj5\" (UID: \"c74f75d9-0067-422a-a233-ade5735b2645\") " pod="calico-system/goldmane-768f4c5c69-xjqj5"
Jul 7 00:10:52.466500 containerd[1804]: time="2025-07-07T00:10:52.466256970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2vkrn,Uid:f702355d-417f-4caf-86a7-f40f67775a26,Namespace:kube-system,Attempt:0,}"
Jul 7 00:10:52.468549 containerd[1804]: time="2025-07-07T00:10:52.468425275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57c6d9946c-d62bp,Uid:02f7ed80-c14c-4630-a553-ace08c648a2b,Namespace:calico-system,Attempt:0,}"
Jul 7 00:10:52.470685 containerd[1804]: time="2025-07-07T00:10:52.470652080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t4xs8,Uid:8ff5f712-5346-4f8c-8f1a-94e4806cd738,Namespace:kube-system,Attempt:0,}"
Jul 7 00:10:52.473047 containerd[1804]: time="2025-07-07T00:10:52.473034831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56d4f4c756-5qjv5,Uid:33756664-bad3-4e93-964e-584b092ec7ee,Namespace:calico-system,Attempt:0,}"
Jul 7 00:10:52.475493 containerd[1804]: time="2025-07-07T00:10:52.475470901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-898df8c5d-x44jj,Uid:a5989df0-3d41-4e07-823a-56249763eb4e,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 00:10:52.477597 containerd[1804]: time="2025-07-07T00:10:52.477558120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xjqj5,Uid:c74f75d9-0067-422a-a233-ade5735b2645,Namespace:calico-system,Attempt:0,}"
Jul 7 00:10:52.480994 containerd[1804]: time="2025-07-07T00:10:52.480942614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-898df8c5d-mjxn2,Uid:2c89e35d-ce3a-44df-8fe5-08a58c4b851d,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 00:10:52.529930 containerd[1804]: time="2025-07-07T00:10:52.529876861Z" level=info msg="shim disconnected" id=22b22690ea69caf294c2aa833d9167f2de29c5b07047b0047c7975ac9ac1d8e9 namespace=k8s.io
Jul 7 00:10:52.529930 containerd[1804]: time="2025-07-07T00:10:52.529927629Z" level=warning msg="cleaning up after shim disconnected" id=22b22690ea69caf294c2aa833d9167f2de29c5b07047b0047c7975ac9ac1d8e9 namespace=k8s.io
Jul 7 00:10:52.530113 containerd[1804]: time="2025-07-07T00:10:52.529933163Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:10:52.573849 containerd[1804]: time="2025-07-07T00:10:52.573815402Z" level=error msg="Failed to destroy network for sandbox \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.573849 containerd[1804]: time="2025-07-07T00:10:52.573841214Z" level=error msg="Failed to destroy network for sandbox \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.574072 containerd[1804]: time="2025-07-07T00:10:52.574054049Z" level=error msg="encountered an error cleaning up failed sandbox \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.574106 containerd[1804]: time="2025-07-07T00:10:52.574090368Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56d4f4c756-5qjv5,Uid:33756664-bad3-4e93-964e-584b092ec7ee,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.574176 containerd[1804]: time="2025-07-07T00:10:52.574063845Z" level=error msg="encountered an error cleaning up failed sandbox \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.574212 containerd[1804]: time="2025-07-07T00:10:52.574170494Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t4xs8,Uid:8ff5f712-5346-4f8c-8f1a-94e4806cd738,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.574212 containerd[1804]: time="2025-07-07T00:10:52.574182292Z" level=error msg="Failed to destroy network for sandbox \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.574268 kubelet[3067]: E0707 00:10:52.574243 3067 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.574486 kubelet[3067]: E0707 00:10:52.574295 3067 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56d4f4c756-5qjv5"
Jul 7 00:10:52.574486 kubelet[3067]: E0707 00:10:52.574310 3067 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56d4f4c756-5qjv5"
Jul 7 00:10:52.574486 kubelet[3067]: E0707 00:10:52.574243 3067 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.574486 kubelet[3067]: E0707 00:10:52.574342 3067 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t4xs8"
Jul 7 00:10:52.574577 containerd[1804]: time="2025-07-07T00:10:52.574432539Z" level=error msg="encountered an error cleaning up failed sandbox \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.574577 containerd[1804]: time="2025-07-07T00:10:52.574464508Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2vkrn,Uid:f702355d-417f-4caf-86a7-f40f67775a26,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.574621 kubelet[3067]: E0707 00:10:52.574355 3067 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t4xs8"
Jul 7 00:10:52.574621 kubelet[3067]: E0707 00:10:52.574374 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t4xs8_kube-system(8ff5f712-5346-4f8c-8f1a-94e4806cd738)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t4xs8_kube-system(8ff5f712-5346-4f8c-8f1a-94e4806cd738)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t4xs8" podUID="8ff5f712-5346-4f8c-8f1a-94e4806cd738"
Jul 7 00:10:52.574621 kubelet[3067]: E0707 00:10:52.574337 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-56d4f4c756-5qjv5_calico-system(33756664-bad3-4e93-964e-584b092ec7ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-56d4f4c756-5qjv5_calico-system(33756664-bad3-4e93-964e-584b092ec7ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56d4f4c756-5qjv5" podUID="33756664-bad3-4e93-964e-584b092ec7ee"
Jul 7 00:10:52.574710 kubelet[3067]: E0707 00:10:52.574556 3067 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.574710 kubelet[3067]: E0707 00:10:52.574577 3067 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2vkrn"
Jul 7 00:10:52.574710 kubelet[3067]: E0707 00:10:52.574590 3067 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2vkrn"
Jul 7 00:10:52.574769 kubelet[3067]: E0707 00:10:52.574614 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2vkrn_kube-system(f702355d-417f-4caf-86a7-f40f67775a26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2vkrn_kube-system(f702355d-417f-4caf-86a7-f40f67775a26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2vkrn" podUID="f702355d-417f-4caf-86a7-f40f67775a26"
Jul 7 00:10:52.574918 containerd[1804]: time="2025-07-07T00:10:52.574901206Z" level=error msg="Failed to destroy network for sandbox \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.575038 containerd[1804]: time="2025-07-07T00:10:52.575024786Z" level=error msg="Failed to destroy network for sandbox \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.575121 containerd[1804]: time="2025-07-07T00:10:52.575104119Z" level=error msg="encountered an error cleaning up failed sandbox \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.575165 containerd[1804]: time="2025-07-07T00:10:52.575137127Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57c6d9946c-d62bp,Uid:02f7ed80-c14c-4630-a553-ace08c648a2b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.575222 containerd[1804]: time="2025-07-07T00:10:52.575196968Z" level=error msg="encountered an error cleaning up failed sandbox \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.575257 containerd[1804]: time="2025-07-07T00:10:52.575225721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xjqj5,Uid:c74f75d9-0067-422a-a233-ade5735b2645,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.575296 kubelet[3067]: E0707 00:10:52.575213 3067 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.575296 kubelet[3067]: E0707 00:10:52.575242 3067 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57c6d9946c-d62bp"
Jul 7 00:10:52.575296 kubelet[3067]: E0707 00:10:52.575260 3067 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57c6d9946c-d62bp"
Jul 7 00:10:52.575296 kubelet[3067]: E0707 00:10:52.575285 3067 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.575403 kubelet[3067]: E0707 00:10:52.575289 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57c6d9946c-d62bp_calico-system(02f7ed80-c14c-4630-a553-ace08c648a2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57c6d9946c-d62bp_calico-system(02f7ed80-c14c-4630-a553-ace08c648a2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57c6d9946c-d62bp" podUID="02f7ed80-c14c-4630-a553-ace08c648a2b"
Jul 7 00:10:52.575403 kubelet[3067]: E0707 00:10:52.575301 3067 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-xjqj5"
Jul 7 00:10:52.575403 kubelet[3067]: E0707 00:10:52.575315 3067 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-xjqj5"
Jul 7 00:10:52.575474 kubelet[3067]: E0707 00:10:52.575331 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-xjqj5_calico-system(c74f75d9-0067-422a-a233-ade5735b2645)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-xjqj5_calico-system(c74f75d9-0067-422a-a233-ade5735b2645)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-xjqj5" podUID="c74f75d9-0067-422a-a233-ade5735b2645"
Jul 7 00:10:52.575800 containerd[1804]: time="2025-07-07T00:10:52.575778746Z" level=error msg="Failed to destroy network for sandbox \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.575968 containerd[1804]: time="2025-07-07T00:10:52.575950061Z" level=error msg="encountered an error cleaning up failed sandbox \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.576009 containerd[1804]: time="2025-07-07T00:10:52.575981212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-898df8c5d-x44jj,Uid:a5989df0-3d41-4e07-823a-56249763eb4e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.576063 kubelet[3067]: E0707 00:10:52.576040 3067 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.576104 kubelet[3067]: E0707 00:10:52.576062 3067 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-898df8c5d-x44jj"
Jul 7 00:10:52.576104 kubelet[3067]: E0707 00:10:52.576071 3067 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-898df8c5d-x44jj"
Jul 7 00:10:52.576104 kubelet[3067]: E0707 00:10:52.576088 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-898df8c5d-x44jj_calico-apiserver(a5989df0-3d41-4e07-823a-56249763eb4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-898df8c5d-x44jj_calico-apiserver(a5989df0-3d41-4e07-823a-56249763eb4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-898df8c5d-x44jj" podUID="a5989df0-3d41-4e07-823a-56249763eb4e"
Jul 7 00:10:52.576468 containerd[1804]: time="2025-07-07T00:10:52.576454642Z" level=error msg="Failed to destroy network for sandbox \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.576595 containerd[1804]: time="2025-07-07T00:10:52.576583029Z" level=error msg="encountered an error cleaning up failed sandbox \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.576618 containerd[1804]: time="2025-07-07T00:10:52.576603651Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-898df8c5d-mjxn2,Uid:2c89e35d-ce3a-44df-8fe5-08a58c4b851d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.576664 kubelet[3067]: E0707 00:10:52.576655 3067 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:52.576687 kubelet[3067]: E0707 00:10:52.576669 3067 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-898df8c5d-mjxn2"
Jul 7 00:10:52.576687 kubelet[3067]: E0707 00:10:52.576676 3067 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-898df8c5d-mjxn2"
Jul 7 00:10:52.576726 kubelet[3067]: E0707 00:10:52.576690 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-898df8c5d-mjxn2_calico-apiserver(2c89e35d-ce3a-44df-8fe5-08a58c4b851d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-898df8c5d-mjxn2_calico-apiserver(2c89e35d-ce3a-44df-8fe5-08a58c4b851d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-898df8c5d-mjxn2" podUID="2c89e35d-ce3a-44df-8fe5-08a58c4b851d"
Jul 7 00:10:53.314676 systemd[1]: Created slice kubepods-besteffort-pod9c39c50a_eb2f_499c_b38e_71339392cd68.slice - libcontainer container kubepods-besteffort-pod9c39c50a_eb2f_499c_b38e_71339392cd68.slice.
Jul 7 00:10:53.320538 containerd[1804]: time="2025-07-07T00:10:53.320392006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvdv8,Uid:9c39c50a-eb2f-499c-b38e-71339392cd68,Namespace:calico-system,Attempt:0,}"
Jul 7 00:10:53.348551 containerd[1804]: time="2025-07-07T00:10:53.348494275Z" level=error msg="Failed to destroy network for sandbox \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:53.348736 containerd[1804]: time="2025-07-07T00:10:53.348686921Z" level=error msg="encountered an error cleaning up failed sandbox \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:53.348736 containerd[1804]: time="2025-07-07T00:10:53.348719145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvdv8,Uid:9c39c50a-eb2f-499c-b38e-71339392cd68,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:53.348933 kubelet[3067]: E0707 00:10:53.348887 3067 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:10:53.348933 kubelet[3067]: E0707 00:10:53.348925 3067 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mvdv8"
Jul 7 00:10:53.349001 kubelet[3067]: E0707 00:10:53.348943 3067 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mvdv8"
Jul 7 00:10:53.349001 kubelet[3067]: E0707 00:10:53.348969 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mvdv8_calico-system(9c39c50a-eb2f-499c-b38e-71339392cd68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mvdv8_calico-system(9c39c50a-eb2f-499c-b38e-71339392cd68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mvdv8" podUID="9c39c50a-eb2f-499c-b38e-71339392cd68"
Jul 7 00:10:53.385397 kubelet[3067]: I0707 00:10:53.385336 3067 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369"
Jul 7 00:10:53.385963 containerd[1804]: time="2025-07-07T00:10:53.385924036Z" level=info msg="StopPodSandbox for \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\""
Jul 7 00:10:53.386191 containerd[1804]: time="2025-07-07T00:10:53.386168248Z" level=info msg="Ensure that sandbox f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369 in task-service has been cleanup successfully"
Jul 7 00:10:53.386275 kubelet[3067]: I0707 00:10:53.386237 3067 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2"
Jul 7 00:10:53.387224 containerd[1804]: time="2025-07-07T00:10:53.387145474Z" level=info msg="StopPodSandbox for \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\""
Jul 7 00:10:53.387613 containerd[1804]: time="2025-07-07T00:10:53.387570830Z" level=info msg="Ensure that sandbox ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2 in task-service has been cleanup successfully"
Jul 7 00:10:53.388823 kubelet[3067]: I0707 00:10:53.388790 3067 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59"
Jul 7 00:10:53.389752 containerd[1804]: time="2025-07-07T00:10:53.389706604Z" level=info msg="StopPodSandbox for \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\""
Jul 7 00:10:53.390031 containerd[1804]: time="2025-07-07T00:10:53.389995738Z" level=info msg="Ensure that sandbox f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59 in task-service has been cleanup successfully"
Jul 7 00:10:53.393597 kubelet[3067]: I0707 00:10:53.393547 3067 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604"
Jul 7 00:10:53.393886 containerd[1804]:
time="2025-07-07T00:10:53.393823940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 00:10:53.394451 containerd[1804]: time="2025-07-07T00:10:53.394388606Z" level=info msg="StopPodSandbox for \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\"" Jul 7 00:10:53.394596 containerd[1804]: time="2025-07-07T00:10:53.394584491Z" level=info msg="Ensure that sandbox 55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604 in task-service has been cleanup successfully" Jul 7 00:10:53.394820 kubelet[3067]: I0707 00:10:53.394806 3067 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Jul 7 00:10:53.395140 containerd[1804]: time="2025-07-07T00:10:53.395111646Z" level=info msg="StopPodSandbox for \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\"" Jul 7 00:10:53.395269 containerd[1804]: time="2025-07-07T00:10:53.395254261Z" level=info msg="Ensure that sandbox 9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838 in task-service has been cleanup successfully" Jul 7 00:10:53.395458 kubelet[3067]: I0707 00:10:53.395442 3067 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Jul 7 00:10:53.395747 containerd[1804]: time="2025-07-07T00:10:53.395726162Z" level=info msg="StopPodSandbox for \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\"" Jul 7 00:10:53.395869 containerd[1804]: time="2025-07-07T00:10:53.395858012Z" level=info msg="Ensure that sandbox 7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579 in task-service has been cleanup successfully" Jul 7 00:10:53.396065 kubelet[3067]: I0707 00:10:53.396052 3067 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Jul 7 00:10:53.396422 
containerd[1804]: time="2025-07-07T00:10:53.396400430Z" level=info msg="StopPodSandbox for \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\"" Jul 7 00:10:53.396557 containerd[1804]: time="2025-07-07T00:10:53.396544261Z" level=info msg="Ensure that sandbox 1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533 in task-service has been cleanup successfully" Jul 7 00:10:53.396690 kubelet[3067]: I0707 00:10:53.396678 3067 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Jul 7 00:10:53.397033 containerd[1804]: time="2025-07-07T00:10:53.397012553Z" level=info msg="StopPodSandbox for \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\"" Jul 7 00:10:53.397188 containerd[1804]: time="2025-07-07T00:10:53.397173451Z" level=info msg="Ensure that sandbox 2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5 in task-service has been cleanup successfully" Jul 7 00:10:53.409192 containerd[1804]: time="2025-07-07T00:10:53.409151000Z" level=error msg="StopPodSandbox for \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\" failed" error="failed to destroy network for sandbox \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:10:53.409329 kubelet[3067]: E0707 00:10:53.409308 3067 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Jul 7 00:10:53.409386 kubelet[3067]: E0707 00:10:53.409345 3067 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369"} Jul 7 00:10:53.409419 kubelet[3067]: E0707 00:10:53.409399 3067 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c74f75d9-0067-422a-a233-ade5735b2645\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:10:53.409482 kubelet[3067]: E0707 00:10:53.409421 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c74f75d9-0067-422a-a233-ade5735b2645\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-xjqj5" podUID="c74f75d9-0067-422a-a233-ade5735b2645" Jul 7 00:10:53.409676 containerd[1804]: time="2025-07-07T00:10:53.409656259Z" level=error msg="StopPodSandbox for \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\" failed" error="failed to destroy network for sandbox \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jul 7 00:10:53.409768 kubelet[3067]: E0707 00:10:53.409753 3067 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Jul 7 00:10:53.409810 containerd[1804]: time="2025-07-07T00:10:53.409751288Z" level=error msg="StopPodSandbox for \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\" failed" error="failed to destroy network for sandbox \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:10:53.409845 kubelet[3067]: E0707 00:10:53.409773 3067 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59"} Jul 7 00:10:53.409845 kubelet[3067]: E0707 00:10:53.409794 3067 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2c89e35d-ce3a-44df-8fe5-08a58c4b851d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:10:53.409845 kubelet[3067]: E0707 00:10:53.409811 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2c89e35d-ce3a-44df-8fe5-08a58c4b851d\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-898df8c5d-mjxn2" podUID="2c89e35d-ce3a-44df-8fe5-08a58c4b851d" Jul 7 00:10:53.409845 kubelet[3067]: E0707 00:10:53.409819 3067 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Jul 7 00:10:53.409845 kubelet[3067]: E0707 00:10:53.409842 3067 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2"} Jul 7 00:10:53.410022 kubelet[3067]: E0707 00:10:53.409865 3067 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"33756664-bad3-4e93-964e-584b092ec7ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:10:53.410022 kubelet[3067]: E0707 00:10:53.409886 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"33756664-bad3-4e93-964e-584b092ec7ee\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56d4f4c756-5qjv5" podUID="33756664-bad3-4e93-964e-584b092ec7ee" Jul 7 00:10:53.411698 containerd[1804]: time="2025-07-07T00:10:53.411668965Z" level=error msg="StopPodSandbox for \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\" failed" error="failed to destroy network for sandbox \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:10:53.411770 containerd[1804]: time="2025-07-07T00:10:53.411734535Z" level=error msg="StopPodSandbox for \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\" failed" error="failed to destroy network for sandbox \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:10:53.411847 kubelet[3067]: E0707 00:10:53.411826 3067 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Jul 7 00:10:53.411880 kubelet[3067]: E0707 00:10:53.411859 3067 
kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838"} Jul 7 00:10:53.411898 kubelet[3067]: E0707 00:10:53.411878 3067 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c39c50a-eb2f-499c-b38e-71339392cd68\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:10:53.411898 kubelet[3067]: E0707 00:10:53.411893 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c39c50a-eb2f-499c-b38e-71339392cd68\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mvdv8" podUID="9c39c50a-eb2f-499c-b38e-71339392cd68" Jul 7 00:10:53.411967 kubelet[3067]: E0707 00:10:53.411829 3067 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Jul 7 00:10:53.411967 kubelet[3067]: E0707 00:10:53.411910 3067 kuberuntime_manager.go:1546] "Failed to stop 
sandbox" podSandboxID={"Type":"containerd","ID":"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579"} Jul 7 00:10:53.411967 kubelet[3067]: E0707 00:10:53.411920 3067 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a5989df0-3d41-4e07-823a-56249763eb4e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:10:53.411967 kubelet[3067]: E0707 00:10:53.411929 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a5989df0-3d41-4e07-823a-56249763eb4e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-898df8c5d-x44jj" podUID="a5989df0-3d41-4e07-823a-56249763eb4e" Jul 7 00:10:53.412054 containerd[1804]: time="2025-07-07T00:10:53.411911064Z" level=error msg="StopPodSandbox for \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\" failed" error="failed to destroy network for sandbox \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:10:53.412077 kubelet[3067]: E0707 00:10:53.411963 3067 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy 
network for sandbox \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Jul 7 00:10:53.412077 kubelet[3067]: E0707 00:10:53.411974 3067 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604"} Jul 7 00:10:53.412077 kubelet[3067]: E0707 00:10:53.411987 3067 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ff5f712-5346-4f8c-8f1a-94e4806cd738\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:10:53.412077 kubelet[3067]: E0707 00:10:53.411997 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ff5f712-5346-4f8c-8f1a-94e4806cd738\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t4xs8" podUID="8ff5f712-5346-4f8c-8f1a-94e4806cd738" Jul 7 00:10:53.412763 containerd[1804]: time="2025-07-07T00:10:53.412747107Z" level=error msg="StopPodSandbox for \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\" failed" error="failed to destroy 
network for sandbox \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:10:53.412823 kubelet[3067]: E0707 00:10:53.412812 3067 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Jul 7 00:10:53.412846 kubelet[3067]: E0707 00:10:53.412828 3067 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5"} Jul 7 00:10:53.412865 kubelet[3067]: E0707 00:10:53.412845 3067 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f702355d-417f-4caf-86a7-f40f67775a26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:10:53.412865 kubelet[3067]: E0707 00:10:53.412856 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f702355d-417f-4caf-86a7-f40f67775a26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2vkrn" podUID="f702355d-417f-4caf-86a7-f40f67775a26" Jul 7 00:10:53.413078 containerd[1804]: time="2025-07-07T00:10:53.413063960Z" level=error msg="StopPodSandbox for \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\" failed" error="failed to destroy network for sandbox \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:10:53.413131 kubelet[3067]: E0707 00:10:53.413116 3067 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Jul 7 00:10:53.413152 kubelet[3067]: E0707 00:10:53.413135 3067 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533"} Jul 7 00:10:53.413152 kubelet[3067]: E0707 00:10:53.413149 3067 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"02f7ed80-c14c-4630-a553-ace08c648a2b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Jul 7 00:10:53.413194 kubelet[3067]: E0707 00:10:53.413160 3067 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"02f7ed80-c14c-4630-a553-ace08c648a2b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57c6d9946c-d62bp" podUID="02f7ed80-c14c-4630-a553-ace08c648a2b" Jul 7 00:10:53.514202 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59-shm.mount: Deactivated successfully. Jul 7 00:10:53.514468 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369-shm.mount: Deactivated successfully. Jul 7 00:10:53.514669 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579-shm.mount: Deactivated successfully. Jul 7 00:10:53.514854 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2-shm.mount: Deactivated successfully. Jul 7 00:10:53.515035 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533-shm.mount: Deactivated successfully. Jul 7 00:10:53.515254 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5-shm.mount: Deactivated successfully. Jul 7 00:10:53.515443 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604-shm.mount: Deactivated successfully. 
Jul 7 00:10:53.612925 kubelet[3067]: I0707 00:10:53.612687 3067 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:10:58.774370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1678878486.mount: Deactivated successfully. Jul 7 00:10:58.808780 containerd[1804]: time="2025-07-07T00:10:58.808726633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:58.809093 containerd[1804]: time="2025-07-07T00:10:58.809078028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 00:10:58.809455 containerd[1804]: time="2025-07-07T00:10:58.809413887Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:58.810300 containerd[1804]: time="2025-07-07T00:10:58.810258777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:10:58.810912 containerd[1804]: time="2025-07-07T00:10:58.810868871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 5.416975831s" Jul 7 00:10:58.810912 containerd[1804]: time="2025-07-07T00:10:58.810884190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 00:10:58.814183 containerd[1804]: time="2025-07-07T00:10:58.814164775Z" level=info 
msg="CreateContainer within sandbox \"bb931834930d1915f9ed09b747012c36a8ed2448405ba99747c95f4f400d4a85\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 00:10:58.819721 containerd[1804]: time="2025-07-07T00:10:58.819692497Z" level=info msg="CreateContainer within sandbox \"bb931834930d1915f9ed09b747012c36a8ed2448405ba99747c95f4f400d4a85\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d17aad8c4d55e41df1e745b9c4a5a4da780af9742f692670eb68acd869fc3db0\"" Jul 7 00:10:58.819992 containerd[1804]: time="2025-07-07T00:10:58.819976784Z" level=info msg="StartContainer for \"d17aad8c4d55e41df1e745b9c4a5a4da780af9742f692670eb68acd869fc3db0\"" Jul 7 00:10:58.840443 systemd[1]: Started cri-containerd-d17aad8c4d55e41df1e745b9c4a5a4da780af9742f692670eb68acd869fc3db0.scope - libcontainer container d17aad8c4d55e41df1e745b9c4a5a4da780af9742f692670eb68acd869fc3db0. Jul 7 00:10:58.856290 containerd[1804]: time="2025-07-07T00:10:58.856229406Z" level=info msg="StartContainer for \"d17aad8c4d55e41df1e745b9c4a5a4da780af9742f692670eb68acd869fc3db0\" returns successfully" Jul 7 00:10:58.930403 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 00:10:58.930459 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 7 00:10:58.966992 containerd[1804]: time="2025-07-07T00:10:58.966968300Z" level=info msg="StopPodSandbox for \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\"" Jul 7 00:10:59.008943 containerd[1804]: 2025-07-07 00:10:58.991 [INFO][4629] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Jul 7 00:10:59.008943 containerd[1804]: 2025-07-07 00:10:58.991 [INFO][4629] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" iface="eth0" netns="/var/run/netns/cni-d7add28a-d4af-f205-915c-9f74d927202f" Jul 7 00:10:59.008943 containerd[1804]: 2025-07-07 00:10:58.991 [INFO][4629] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" iface="eth0" netns="/var/run/netns/cni-d7add28a-d4af-f205-915c-9f74d927202f" Jul 7 00:10:59.008943 containerd[1804]: 2025-07-07 00:10:58.992 [INFO][4629] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" iface="eth0" netns="/var/run/netns/cni-d7add28a-d4af-f205-915c-9f74d927202f" Jul 7 00:10:59.008943 containerd[1804]: 2025-07-07 00:10:58.992 [INFO][4629] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Jul 7 00:10:59.008943 containerd[1804]: 2025-07-07 00:10:58.992 [INFO][4629] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Jul 7 00:10:59.008943 containerd[1804]: 2025-07-07 00:10:59.002 [INFO][4660] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" HandleID="k8s-pod-network.ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--56d4f4c756--5qjv5-eth0" Jul 7 00:10:59.008943 containerd[1804]: 2025-07-07 00:10:59.002 [INFO][4660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:10:59.008943 containerd[1804]: 2025-07-07 00:10:59.002 [INFO][4660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:10:59.008943 containerd[1804]: 2025-07-07 00:10:59.005 [WARNING][4660] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" HandleID="k8s-pod-network.ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--56d4f4c756--5qjv5-eth0" Jul 7 00:10:59.008943 containerd[1804]: 2025-07-07 00:10:59.005 [INFO][4660] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" HandleID="k8s-pod-network.ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--56d4f4c756--5qjv5-eth0" Jul 7 00:10:59.008943 containerd[1804]: 2025-07-07 00:10:59.006 [INFO][4660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:10:59.008943 containerd[1804]: 2025-07-07 00:10:59.007 [INFO][4629] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Jul 7 00:10:59.009224 containerd[1804]: time="2025-07-07T00:10:59.009021715Z" level=info msg="TearDown network for sandbox \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\" successfully" Jul 7 00:10:59.009224 containerd[1804]: time="2025-07-07T00:10:59.009036187Z" level=info msg="StopPodSandbox for \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\" returns successfully" Jul 7 00:10:59.134457 kubelet[3067]: I0707 00:10:59.134278 3067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfn5r\" (UniqueName: \"kubernetes.io/projected/33756664-bad3-4e93-964e-584b092ec7ee-kube-api-access-xfn5r\") pod \"33756664-bad3-4e93-964e-584b092ec7ee\" (UID: \"33756664-bad3-4e93-964e-584b092ec7ee\") " Jul 7 00:10:59.134457 kubelet[3067]: I0707 00:10:59.134399 3067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/33756664-bad3-4e93-964e-584b092ec7ee-whisker-backend-key-pair\") pod \"33756664-bad3-4e93-964e-584b092ec7ee\" (UID: \"33756664-bad3-4e93-964e-584b092ec7ee\") " Jul 7 00:10:59.135498 kubelet[3067]: I0707 00:10:59.134482 3067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33756664-bad3-4e93-964e-584b092ec7ee-whisker-ca-bundle\") pod \"33756664-bad3-4e93-964e-584b092ec7ee\" (UID: \"33756664-bad3-4e93-964e-584b092ec7ee\") " Jul 7 00:10:59.135498 kubelet[3067]: I0707 00:10:59.135383 3067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33756664-bad3-4e93-964e-584b092ec7ee-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "33756664-bad3-4e93-964e-584b092ec7ee" (UID: "33756664-bad3-4e93-964e-584b092ec7ee"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 00:10:59.140395 kubelet[3067]: I0707 00:10:59.140294 3067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33756664-bad3-4e93-964e-584b092ec7ee-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "33756664-bad3-4e93-964e-584b092ec7ee" (UID: "33756664-bad3-4e93-964e-584b092ec7ee"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 00:10:59.140395 kubelet[3067]: I0707 00:10:59.140324 3067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33756664-bad3-4e93-964e-584b092ec7ee-kube-api-access-xfn5r" (OuterVolumeSpecName: "kube-api-access-xfn5r") pod "33756664-bad3-4e93-964e-584b092ec7ee" (UID: "33756664-bad3-4e93-964e-584b092ec7ee"). InnerVolumeSpecName "kube-api-access-xfn5r". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:10:59.235986 kubelet[3067]: I0707 00:10:59.235915 3067 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/33756664-bad3-4e93-964e-584b092ec7ee-whisker-backend-key-pair\") on node \"ci-4081.3.4-a-fd0ee851f3\" DevicePath \"\"" Jul 7 00:10:59.235986 kubelet[3067]: I0707 00:10:59.235983 3067 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33756664-bad3-4e93-964e-584b092ec7ee-whisker-ca-bundle\") on node \"ci-4081.3.4-a-fd0ee851f3\" DevicePath \"\"" Jul 7 00:10:59.236368 kubelet[3067]: I0707 00:10:59.236014 3067 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfn5r\" (UniqueName: \"kubernetes.io/projected/33756664-bad3-4e93-964e-584b092ec7ee-kube-api-access-xfn5r\") on node \"ci-4081.3.4-a-fd0ee851f3\" DevicePath \"\"" Jul 7 00:10:59.425599 systemd[1]: Removed slice kubepods-besteffort-pod33756664_bad3_4e93_964e_584b092ec7ee.slice - libcontainer container kubepods-besteffort-pod33756664_bad3_4e93_964e_584b092ec7ee.slice. Jul 7 00:10:59.437556 kubelet[3067]: I0707 00:10:59.437312 3067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ckbbv" podStartSLOduration=1.97291463 podStartE2EDuration="17.437297578s" podCreationTimestamp="2025-07-07 00:10:42 +0000 UTC" firstStartedPulling="2025-07-07 00:10:43.346807662 +0000 UTC m=+15.144843916" lastFinishedPulling="2025-07-07 00:10:58.811190622 +0000 UTC m=+30.609226864" observedRunningTime="2025-07-07 00:10:59.436825698 +0000 UTC m=+31.234861948" watchObservedRunningTime="2025-07-07 00:10:59.437297578 +0000 UTC m=+31.235333819" Jul 7 00:10:59.457070 systemd[1]: Created slice kubepods-besteffort-pod11e486bd_1d21_4472_a0f4_4931fb4b31d3.slice - libcontainer container kubepods-besteffort-pod11e486bd_1d21_4472_a0f4_4931fb4b31d3.slice. 
Jul 7 00:10:59.538101 kubelet[3067]: I0707 00:10:59.537982 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11e486bd-1d21-4472-a0f4-4931fb4b31d3-whisker-ca-bundle\") pod \"whisker-79c575bd5b-6w59f\" (UID: \"11e486bd-1d21-4472-a0f4-4931fb4b31d3\") " pod="calico-system/whisker-79c575bd5b-6w59f" Jul 7 00:10:59.538380 kubelet[3067]: I0707 00:10:59.538120 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wngw2\" (UniqueName: \"kubernetes.io/projected/11e486bd-1d21-4472-a0f4-4931fb4b31d3-kube-api-access-wngw2\") pod \"whisker-79c575bd5b-6w59f\" (UID: \"11e486bd-1d21-4472-a0f4-4931fb4b31d3\") " pod="calico-system/whisker-79c575bd5b-6w59f" Jul 7 00:10:59.538484 kubelet[3067]: I0707 00:10:59.538392 3067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/11e486bd-1d21-4472-a0f4-4931fb4b31d3-whisker-backend-key-pair\") pod \"whisker-79c575bd5b-6w59f\" (UID: \"11e486bd-1d21-4472-a0f4-4931fb4b31d3\") " pod="calico-system/whisker-79c575bd5b-6w59f" Jul 7 00:10:59.760064 containerd[1804]: time="2025-07-07T00:10:59.759981956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79c575bd5b-6w59f,Uid:11e486bd-1d21-4472-a0f4-4931fb4b31d3,Namespace:calico-system,Attempt:0,}" Jul 7 00:10:59.777066 systemd[1]: run-netns-cni\x2dd7add28a\x2dd4af\x2df205\x2d915c\x2d9f74d927202f.mount: Deactivated successfully. Jul 7 00:10:59.777120 systemd[1]: var-lib-kubelet-pods-33756664\x2dbad3\x2d4e93\x2d964e\x2d584b092ec7ee-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxfn5r.mount: Deactivated successfully. 
Jul 7 00:10:59.777220 systemd[1]: var-lib-kubelet-pods-33756664\x2dbad3\x2d4e93\x2d964e\x2d584b092ec7ee-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 00:10:59.823634 systemd-networkd[1606]: cali2155a63df02: Link UP Jul 7 00:10:59.823817 systemd-networkd[1606]: cali2155a63df02: Gained carrier Jul 7 00:10:59.829623 containerd[1804]: 2025-07-07 00:10:59.777 [INFO][4688] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:10:59.829623 containerd[1804]: 2025-07-07 00:10:59.785 [INFO][4688] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--fd0ee851f3-k8s-whisker--79c575bd5b--6w59f-eth0 whisker-79c575bd5b- calico-system 11e486bd-1d21-4472-a0f4-4931fb4b31d3 864 0 2025-07-07 00:10:59 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79c575bd5b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.4-a-fd0ee851f3 whisker-79c575bd5b-6w59f eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2155a63df02 [] [] }} ContainerID="e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd" Namespace="calico-system" Pod="whisker-79c575bd5b-6w59f" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--79c575bd5b--6w59f-" Jul 7 00:10:59.829623 containerd[1804]: 2025-07-07 00:10:59.785 [INFO][4688] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd" Namespace="calico-system" Pod="whisker-79c575bd5b-6w59f" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--79c575bd5b--6w59f-eth0" Jul 7 00:10:59.829623 containerd[1804]: 2025-07-07 00:10:59.798 [INFO][4709] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd" HandleID="k8s-pod-network.e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--79c575bd5b--6w59f-eth0" Jul 7 00:10:59.829623 containerd[1804]: 2025-07-07 00:10:59.798 [INFO][4709] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd" HandleID="k8s-pod-network.e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--79c575bd5b--6w59f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011a1d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-fd0ee851f3", "pod":"whisker-79c575bd5b-6w59f", "timestamp":"2025-07-07 00:10:59.798738743 +0000 UTC"}, Hostname:"ci-4081.3.4-a-fd0ee851f3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:10:59.829623 containerd[1804]: 2025-07-07 00:10:59.798 [INFO][4709] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:10:59.829623 containerd[1804]: 2025-07-07 00:10:59.798 [INFO][4709] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:10:59.829623 containerd[1804]: 2025-07-07 00:10:59.798 [INFO][4709] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-fd0ee851f3' Jul 7 00:10:59.829623 containerd[1804]: 2025-07-07 00:10:59.803 [INFO][4709] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:59.829623 containerd[1804]: 2025-07-07 00:10:59.806 [INFO][4709] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:59.829623 containerd[1804]: 2025-07-07 00:10:59.809 [INFO][4709] ipam/ipam.go 511: Trying affinity for 192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:59.829623 containerd[1804]: 2025-07-07 00:10:59.811 [INFO][4709] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:59.829623 containerd[1804]: 2025-07-07 00:10:59.812 [INFO][4709] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:59.829623 containerd[1804]: 2025-07-07 00:10:59.812 [INFO][4709] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:59.829623 containerd[1804]: 2025-07-07 00:10:59.813 [INFO][4709] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd Jul 7 00:10:59.829623 containerd[1804]: 2025-07-07 00:10:59.816 [INFO][4709] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:59.829623 containerd[1804]: 2025-07-07 00:10:59.818 [INFO][4709] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.19.193/26] block=192.168.19.192/26 handle="k8s-pod-network.e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:59.829623 containerd[1804]: 2025-07-07 00:10:59.818 [INFO][4709] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.193/26] handle="k8s-pod-network.e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:10:59.829623 containerd[1804]: 2025-07-07 00:10:59.818 [INFO][4709] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:10:59.829623 containerd[1804]: 2025-07-07 00:10:59.818 [INFO][4709] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.193/26] IPv6=[] ContainerID="e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd" HandleID="k8s-pod-network.e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--79c575bd5b--6w59f-eth0" Jul 7 00:10:59.830263 containerd[1804]: 2025-07-07 00:10:59.819 [INFO][4688] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd" Namespace="calico-system" Pod="whisker-79c575bd5b-6w59f" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--79c575bd5b--6w59f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-whisker--79c575bd5b--6w59f-eth0", GenerateName:"whisker-79c575bd5b-", Namespace:"calico-system", SelfLink:"", UID:"11e486bd-1d21-4472-a0f4-4931fb4b31d3", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79c575bd5b", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"", Pod:"whisker-79c575bd5b-6w59f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.19.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2155a63df02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:10:59.830263 containerd[1804]: 2025-07-07 00:10:59.819 [INFO][4688] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.193/32] ContainerID="e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd" Namespace="calico-system" Pod="whisker-79c575bd5b-6w59f" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--79c575bd5b--6w59f-eth0" Jul 7 00:10:59.830263 containerd[1804]: 2025-07-07 00:10:59.819 [INFO][4688] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2155a63df02 ContainerID="e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd" Namespace="calico-system" Pod="whisker-79c575bd5b-6w59f" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--79c575bd5b--6w59f-eth0" Jul 7 00:10:59.830263 containerd[1804]: 2025-07-07 00:10:59.823 [INFO][4688] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd" Namespace="calico-system" Pod="whisker-79c575bd5b-6w59f" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--79c575bd5b--6w59f-eth0" Jul 7 00:10:59.830263 containerd[1804]: 2025-07-07 00:10:59.824 [INFO][4688] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd" Namespace="calico-system" Pod="whisker-79c575bd5b-6w59f" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--79c575bd5b--6w59f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-whisker--79c575bd5b--6w59f-eth0", GenerateName:"whisker-79c575bd5b-", Namespace:"calico-system", SelfLink:"", UID:"11e486bd-1d21-4472-a0f4-4931fb4b31d3", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79c575bd5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd", Pod:"whisker-79c575bd5b-6w59f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.19.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2155a63df02", MAC:"16:82:93:50:1e:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:10:59.830263 containerd[1804]: 2025-07-07 00:10:59.828 [INFO][4688] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd" Namespace="calico-system" 
Pod="whisker-79c575bd5b-6w59f" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--79c575bd5b--6w59f-eth0" Jul 7 00:10:59.837696 containerd[1804]: time="2025-07-07T00:10:59.837646890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:10:59.837890 containerd[1804]: time="2025-07-07T00:10:59.837874987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:10:59.837918 containerd[1804]: time="2025-07-07T00:10:59.837885633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:10:59.837936 containerd[1804]: time="2025-07-07T00:10:59.837927963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:10:59.860666 systemd[1]: Started cri-containerd-e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd.scope - libcontainer container e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd. 
Jul 7 00:10:59.952261 containerd[1804]: time="2025-07-07T00:10:59.952206154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79c575bd5b-6w59f,Uid:11e486bd-1d21-4472-a0f4-4931fb4b31d3,Namespace:calico-system,Attempt:0,} returns sandbox id \"e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd\"" Jul 7 00:10:59.952985 containerd[1804]: time="2025-07-07T00:10:59.952970640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 00:11:00.116190 kernel: bpftool[4934]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 00:11:00.272224 systemd-networkd[1606]: vxlan.calico: Link UP Jul 7 00:11:00.272228 systemd-networkd[1606]: vxlan.calico: Gained carrier Jul 7 00:11:00.300394 kubelet[3067]: I0707 00:11:00.300374 3067 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33756664-bad3-4e93-964e-584b092ec7ee" path="/var/lib/kubelet/pods/33756664-bad3-4e93-964e-584b092ec7ee/volumes" Jul 7 00:11:00.415799 kubelet[3067]: I0707 00:11:00.415785 3067 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:11:01.486091 containerd[1804]: time="2025-07-07T00:11:01.486043794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:01.486341 containerd[1804]: time="2025-07-07T00:11:01.486269734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 00:11:01.486705 containerd[1804]: time="2025-07-07T00:11:01.486663353Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:01.487999 containerd[1804]: time="2025-07-07T00:11:01.487964267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:01.488503 containerd[1804]: time="2025-07-07T00:11:01.488466840Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.535478013s" Jul 7 00:11:01.488503 containerd[1804]: time="2025-07-07T00:11:01.488499766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 00:11:01.489701 containerd[1804]: time="2025-07-07T00:11:01.489688561Z" level=info msg="CreateContainer within sandbox \"e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 00:11:01.494109 containerd[1804]: time="2025-07-07T00:11:01.494096320Z" level=info msg="CreateContainer within sandbox \"e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1d504dc956fb7422cf17ba24ec1f6fcf898af5baf9354c453d26c6ff880bf370\"" Jul 7 00:11:01.494406 containerd[1804]: time="2025-07-07T00:11:01.494353043Z" level=info msg="StartContainer for \"1d504dc956fb7422cf17ba24ec1f6fcf898af5baf9354c453d26c6ff880bf370\"" Jul 7 00:11:01.521400 systemd[1]: Started cri-containerd-1d504dc956fb7422cf17ba24ec1f6fcf898af5baf9354c453d26c6ff880bf370.scope - libcontainer container 1d504dc956fb7422cf17ba24ec1f6fcf898af5baf9354c453d26c6ff880bf370. 
Jul 7 00:11:01.547206 systemd-networkd[1606]: cali2155a63df02: Gained IPv6LL Jul 7 00:11:01.552502 containerd[1804]: time="2025-07-07T00:11:01.552455221Z" level=info msg="StartContainer for \"1d504dc956fb7422cf17ba24ec1f6fcf898af5baf9354c453d26c6ff880bf370\" returns successfully" Jul 7 00:11:01.553208 containerd[1804]: time="2025-07-07T00:11:01.553181069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 00:11:01.930472 systemd-networkd[1606]: vxlan.calico: Gained IPv6LL Jul 7 00:11:04.006989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount175341473.mount: Deactivated successfully. Jul 7 00:11:04.011358 containerd[1804]: time="2025-07-07T00:11:04.011306745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:04.011513 containerd[1804]: time="2025-07-07T00:11:04.011416114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 00:11:04.011806 containerd[1804]: time="2025-07-07T00:11:04.011794633Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:04.012974 containerd[1804]: time="2025-07-07T00:11:04.012959841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:04.013438 containerd[1804]: time="2025-07-07T00:11:04.013401345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.460200086s" Jul 7 00:11:04.013438 containerd[1804]: time="2025-07-07T00:11:04.013417112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 00:11:04.014347 containerd[1804]: time="2025-07-07T00:11:04.014303930Z" level=info msg="CreateContainer within sandbox \"e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 00:11:04.018085 containerd[1804]: time="2025-07-07T00:11:04.018033245Z" level=info msg="CreateContainer within sandbox \"e74716f2288a9c6f6a1f0a7accb7fbae5e5e6b62efcbc117226ca3b010c5befd\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"92cda7a608f47f5516a9ea818112d67d280223328dfbcf55c46f81b917441e3e\"" Jul 7 00:11:04.018301 containerd[1804]: time="2025-07-07T00:11:04.018287800Z" level=info msg="StartContainer for \"92cda7a608f47f5516a9ea818112d67d280223328dfbcf55c46f81b917441e3e\"" Jul 7 00:11:04.042370 systemd[1]: Started cri-containerd-92cda7a608f47f5516a9ea818112d67d280223328dfbcf55c46f81b917441e3e.scope - libcontainer container 92cda7a608f47f5516a9ea818112d67d280223328dfbcf55c46f81b917441e3e. 
Jul 7 00:11:04.065910 containerd[1804]: time="2025-07-07T00:11:04.065881744Z" level=info msg="StartContainer for \"92cda7a608f47f5516a9ea818112d67d280223328dfbcf55c46f81b917441e3e\" returns successfully" Jul 7 00:11:04.300687 containerd[1804]: time="2025-07-07T00:11:04.300448838Z" level=info msg="StopPodSandbox for \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\"" Jul 7 00:11:04.300990 containerd[1804]: time="2025-07-07T00:11:04.300785943Z" level=info msg="StopPodSandbox for \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\"" Jul 7 00:11:04.351920 containerd[1804]: 2025-07-07 00:11:04.336 [INFO][5173] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Jul 7 00:11:04.351920 containerd[1804]: 2025-07-07 00:11:04.336 [INFO][5173] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" iface="eth0" netns="/var/run/netns/cni-9ea76612-7add-2c73-9191-23f0a70100d2" Jul 7 00:11:04.351920 containerd[1804]: 2025-07-07 00:11:04.336 [INFO][5173] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" iface="eth0" netns="/var/run/netns/cni-9ea76612-7add-2c73-9191-23f0a70100d2" Jul 7 00:11:04.351920 containerd[1804]: 2025-07-07 00:11:04.336 [INFO][5173] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" iface="eth0" netns="/var/run/netns/cni-9ea76612-7add-2c73-9191-23f0a70100d2" Jul 7 00:11:04.351920 containerd[1804]: 2025-07-07 00:11:04.336 [INFO][5173] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Jul 7 00:11:04.351920 containerd[1804]: 2025-07-07 00:11:04.336 [INFO][5173] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Jul 7 00:11:04.351920 containerd[1804]: 2025-07-07 00:11:04.345 [INFO][5202] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" HandleID="k8s-pod-network.f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0" Jul 7 00:11:04.351920 containerd[1804]: 2025-07-07 00:11:04.345 [INFO][5202] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:04.351920 containerd[1804]: 2025-07-07 00:11:04.346 [INFO][5202] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:04.351920 containerd[1804]: 2025-07-07 00:11:04.349 [WARNING][5202] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" HandleID="k8s-pod-network.f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0" Jul 7 00:11:04.351920 containerd[1804]: 2025-07-07 00:11:04.349 [INFO][5202] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" HandleID="k8s-pod-network.f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0" Jul 7 00:11:04.351920 containerd[1804]: 2025-07-07 00:11:04.350 [INFO][5202] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:04.351920 containerd[1804]: 2025-07-07 00:11:04.351 [INFO][5173] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Jul 7 00:11:04.352245 containerd[1804]: time="2025-07-07T00:11:04.351999941Z" level=info msg="TearDown network for sandbox \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\" successfully" Jul 7 00:11:04.352245 containerd[1804]: time="2025-07-07T00:11:04.352016436Z" level=info msg="StopPodSandbox for \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\" returns successfully" Jul 7 00:11:04.352372 containerd[1804]: time="2025-07-07T00:11:04.352360477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xjqj5,Uid:c74f75d9-0067-422a-a233-ade5735b2645,Namespace:calico-system,Attempt:1,}" Jul 7 00:11:04.355623 containerd[1804]: 2025-07-07 00:11:04.335 [INFO][5172] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Jul 7 00:11:04.355623 containerd[1804]: 2025-07-07 00:11:04.335 [INFO][5172] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" iface="eth0" netns="/var/run/netns/cni-b266447a-25d5-c5fd-b5b2-9be045926f52" Jul 7 00:11:04.355623 containerd[1804]: 2025-07-07 00:11:04.336 [INFO][5172] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" iface="eth0" netns="/var/run/netns/cni-b266447a-25d5-c5fd-b5b2-9be045926f52" Jul 7 00:11:04.355623 containerd[1804]: 2025-07-07 00:11:04.336 [INFO][5172] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" iface="eth0" netns="/var/run/netns/cni-b266447a-25d5-c5fd-b5b2-9be045926f52" Jul 7 00:11:04.355623 containerd[1804]: 2025-07-07 00:11:04.336 [INFO][5172] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Jul 7 00:11:04.355623 containerd[1804]: 2025-07-07 00:11:04.336 [INFO][5172] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Jul 7 00:11:04.355623 containerd[1804]: 2025-07-07 00:11:04.345 [INFO][5200] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" HandleID="k8s-pod-network.f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0" Jul 7 00:11:04.355623 containerd[1804]: 2025-07-07 00:11:04.346 [INFO][5200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:04.355623 containerd[1804]: 2025-07-07 00:11:04.350 [INFO][5200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:11:04.355623 containerd[1804]: 2025-07-07 00:11:04.353 [WARNING][5200] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" HandleID="k8s-pod-network.f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0" Jul 7 00:11:04.355623 containerd[1804]: 2025-07-07 00:11:04.353 [INFO][5200] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" HandleID="k8s-pod-network.f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0" Jul 7 00:11:04.355623 containerd[1804]: 2025-07-07 00:11:04.354 [INFO][5200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:04.355623 containerd[1804]: 2025-07-07 00:11:04.354 [INFO][5172] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Jul 7 00:11:04.355859 containerd[1804]: time="2025-07-07T00:11:04.355679342Z" level=info msg="TearDown network for sandbox \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\" successfully" Jul 7 00:11:04.355859 containerd[1804]: time="2025-07-07T00:11:04.355690485Z" level=info msg="StopPodSandbox for \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\" returns successfully" Jul 7 00:11:04.355968 containerd[1804]: time="2025-07-07T00:11:04.355957458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-898df8c5d-mjxn2,Uid:2c89e35d-ce3a-44df-8fe5-08a58c4b851d,Namespace:calico-apiserver,Attempt:1,}" Jul 7 00:11:04.421344 systemd-networkd[1606]: cali3cc0a1cd1f5: Link UP Jul 7 00:11:04.421559 systemd-networkd[1606]: cali3cc0a1cd1f5: Gained carrier Jul 7 00:11:04.427943 containerd[1804]: 2025-07-07 00:11:04.386 [INFO][5240] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0 calico-apiserver-898df8c5d- calico-apiserver 2c89e35d-ce3a-44df-8fe5-08a58c4b851d 886 0 2025-07-07 00:10:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:898df8c5d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-a-fd0ee851f3 calico-apiserver-898df8c5d-mjxn2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3cc0a1cd1f5 [] [] }} ContainerID="70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22" Namespace="calico-apiserver" Pod="calico-apiserver-898df8c5d-mjxn2" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-" Jul 7 00:11:04.427943 containerd[1804]: 2025-07-07 00:11:04.387 
[INFO][5240] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22" Namespace="calico-apiserver" Pod="calico-apiserver-898df8c5d-mjxn2" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0" Jul 7 00:11:04.427943 containerd[1804]: 2025-07-07 00:11:04.399 [INFO][5275] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22" HandleID="k8s-pod-network.70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0" Jul 7 00:11:04.427943 containerd[1804]: 2025-07-07 00:11:04.399 [INFO][5275] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22" HandleID="k8s-pod-network.70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f4e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-a-fd0ee851f3", "pod":"calico-apiserver-898df8c5d-mjxn2", "timestamp":"2025-07-07 00:11:04.399745891 +0000 UTC"}, Hostname:"ci-4081.3.4-a-fd0ee851f3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:11:04.427943 containerd[1804]: 2025-07-07 00:11:04.399 [INFO][5275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:04.427943 containerd[1804]: 2025-07-07 00:11:04.399 [INFO][5275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:11:04.427943 containerd[1804]: 2025-07-07 00:11:04.399 [INFO][5275] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-fd0ee851f3' Jul 7 00:11:04.427943 containerd[1804]: 2025-07-07 00:11:04.404 [INFO][5275] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:04.427943 containerd[1804]: 2025-07-07 00:11:04.407 [INFO][5275] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:04.427943 containerd[1804]: 2025-07-07 00:11:04.410 [INFO][5275] ipam/ipam.go 511: Trying affinity for 192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:04.427943 containerd[1804]: 2025-07-07 00:11:04.412 [INFO][5275] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:04.427943 containerd[1804]: 2025-07-07 00:11:04.413 [INFO][5275] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:04.427943 containerd[1804]: 2025-07-07 00:11:04.413 [INFO][5275] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:04.427943 containerd[1804]: 2025-07-07 00:11:04.414 [INFO][5275] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22 Jul 7 00:11:04.427943 containerd[1804]: 2025-07-07 00:11:04.416 [INFO][5275] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:04.427943 containerd[1804]: 2025-07-07 00:11:04.419 [INFO][5275] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.19.194/26] block=192.168.19.192/26 handle="k8s-pod-network.70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:04.427943 containerd[1804]: 2025-07-07 00:11:04.419 [INFO][5275] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.194/26] handle="k8s-pod-network.70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:04.427943 containerd[1804]: 2025-07-07 00:11:04.419 [INFO][5275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:04.427943 containerd[1804]: 2025-07-07 00:11:04.419 [INFO][5275] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.194/26] IPv6=[] ContainerID="70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22" HandleID="k8s-pod-network.70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0" Jul 7 00:11:04.428558 containerd[1804]: 2025-07-07 00:11:04.419 [INFO][5240] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22" Namespace="calico-apiserver" Pod="calico-apiserver-898df8c5d-mjxn2" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0", GenerateName:"calico-apiserver-898df8c5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c89e35d-ce3a-44df-8fe5-08a58c4b851d", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"898df8c5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"", Pod:"calico-apiserver-898df8c5d-mjxn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cc0a1cd1f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:04.428558 containerd[1804]: 2025-07-07 00:11:04.419 [INFO][5240] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.194/32] ContainerID="70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22" Namespace="calico-apiserver" Pod="calico-apiserver-898df8c5d-mjxn2" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0" Jul 7 00:11:04.428558 containerd[1804]: 2025-07-07 00:11:04.420 [INFO][5240] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3cc0a1cd1f5 ContainerID="70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22" Namespace="calico-apiserver" Pod="calico-apiserver-898df8c5d-mjxn2" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0" Jul 7 00:11:04.428558 containerd[1804]: 2025-07-07 00:11:04.421 [INFO][5240] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22" Namespace="calico-apiserver" Pod="calico-apiserver-898df8c5d-mjxn2" 
WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0" Jul 7 00:11:04.428558 containerd[1804]: 2025-07-07 00:11:04.421 [INFO][5240] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22" Namespace="calico-apiserver" Pod="calico-apiserver-898df8c5d-mjxn2" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0", GenerateName:"calico-apiserver-898df8c5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c89e35d-ce3a-44df-8fe5-08a58c4b851d", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"898df8c5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22", Pod:"calico-apiserver-898df8c5d-mjxn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cc0a1cd1f5", MAC:"ca:cb:c1:61:80:aa", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:04.428558 containerd[1804]: 2025-07-07 00:11:04.426 [INFO][5240] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22" Namespace="calico-apiserver" Pod="calico-apiserver-898df8c5d-mjxn2" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0" Jul 7 00:11:04.435144 kubelet[3067]: I0707 00:11:04.435107 3067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-79c575bd5b-6w59f" podStartSLOduration=1.374180696 podStartE2EDuration="5.435095323s" podCreationTimestamp="2025-07-07 00:10:59 +0000 UTC" firstStartedPulling="2025-07-07 00:10:59.952835917 +0000 UTC m=+31.750872161" lastFinishedPulling="2025-07-07 00:11:04.013750546 +0000 UTC m=+35.811786788" observedRunningTime="2025-07-07 00:11:04.434824606 +0000 UTC m=+36.232860849" watchObservedRunningTime="2025-07-07 00:11:04.435095323 +0000 UTC m=+36.233131562" Jul 7 00:11:04.436529 containerd[1804]: time="2025-07-07T00:11:04.436304134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:11:04.436529 containerd[1804]: time="2025-07-07T00:11:04.436517701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:11:04.436529 containerd[1804]: time="2025-07-07T00:11:04.436526594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:04.436647 containerd[1804]: time="2025-07-07T00:11:04.436591055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:04.463473 systemd[1]: Started cri-containerd-70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22.scope - libcontainer container 70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22. Jul 7 00:11:04.489058 containerd[1804]: time="2025-07-07T00:11:04.489008824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-898df8c5d-mjxn2,Uid:2c89e35d-ce3a-44df-8fe5-08a58c4b851d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22\"" Jul 7 00:11:04.489753 containerd[1804]: time="2025-07-07T00:11:04.489738213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:11:04.564357 systemd-networkd[1606]: calibe689a5acab: Link UP Jul 7 00:11:04.565043 systemd-networkd[1606]: calibe689a5acab: Gained carrier Jul 7 00:11:04.591156 containerd[1804]: 2025-07-07 00:11:04.386 [INFO][5236] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0 goldmane-768f4c5c69- calico-system c74f75d9-0067-422a-a233-ade5735b2645 887 0 2025-07-07 00:10:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.4-a-fd0ee851f3 goldmane-768f4c5c69-xjqj5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibe689a5acab [] [] }} ContainerID="7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84" Namespace="calico-system" Pod="goldmane-768f4c5c69-xjqj5" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-" Jul 7 00:11:04.591156 containerd[1804]: 2025-07-07 00:11:04.387 [INFO][5236] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84" Namespace="calico-system" Pod="goldmane-768f4c5c69-xjqj5" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0" Jul 7 00:11:04.591156 containerd[1804]: 2025-07-07 00:11:04.400 [INFO][5277] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84" HandleID="k8s-pod-network.7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0" Jul 7 00:11:04.591156 containerd[1804]: 2025-07-07 00:11:04.400 [INFO][5277] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84" HandleID="k8s-pod-network.7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000136320), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-fd0ee851f3", "pod":"goldmane-768f4c5c69-xjqj5", "timestamp":"2025-07-07 00:11:04.400218188 +0000 UTC"}, Hostname:"ci-4081.3.4-a-fd0ee851f3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:11:04.591156 containerd[1804]: 2025-07-07 00:11:04.400 [INFO][5277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:04.591156 containerd[1804]: 2025-07-07 00:11:04.419 [INFO][5277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:11:04.591156 containerd[1804]: 2025-07-07 00:11:04.419 [INFO][5277] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-fd0ee851f3' Jul 7 00:11:04.591156 containerd[1804]: 2025-07-07 00:11:04.506 [INFO][5277] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:04.591156 containerd[1804]: 2025-07-07 00:11:04.515 [INFO][5277] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:04.591156 containerd[1804]: 2025-07-07 00:11:04.525 [INFO][5277] ipam/ipam.go 511: Trying affinity for 192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:04.591156 containerd[1804]: 2025-07-07 00:11:04.529 [INFO][5277] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:04.591156 containerd[1804]: 2025-07-07 00:11:04.534 [INFO][5277] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:04.591156 containerd[1804]: 2025-07-07 00:11:04.534 [INFO][5277] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:04.591156 containerd[1804]: 2025-07-07 00:11:04.538 [INFO][5277] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84 Jul 7 00:11:04.591156 containerd[1804]: 2025-07-07 00:11:04.545 [INFO][5277] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:04.591156 containerd[1804]: 2025-07-07 00:11:04.556 [INFO][5277] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.19.195/26] block=192.168.19.192/26 handle="k8s-pod-network.7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:04.591156 containerd[1804]: 2025-07-07 00:11:04.556 [INFO][5277] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.195/26] handle="k8s-pod-network.7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:04.591156 containerd[1804]: 2025-07-07 00:11:04.556 [INFO][5277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:04.591156 containerd[1804]: 2025-07-07 00:11:04.556 [INFO][5277] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.195/26] IPv6=[] ContainerID="7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84" HandleID="k8s-pod-network.7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0" Jul 7 00:11:04.593212 containerd[1804]: 2025-07-07 00:11:04.560 [INFO][5236] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84" Namespace="calico-system" Pod="goldmane-768f4c5c69-xjqj5" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"c74f75d9-0067-422a-a233-ade5735b2645", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"", Pod:"goldmane-768f4c5c69-xjqj5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.19.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibe689a5acab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:04.593212 containerd[1804]: 2025-07-07 00:11:04.561 [INFO][5236] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.195/32] ContainerID="7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84" Namespace="calico-system" Pod="goldmane-768f4c5c69-xjqj5" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0" Jul 7 00:11:04.593212 containerd[1804]: 2025-07-07 00:11:04.561 [INFO][5236] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe689a5acab ContainerID="7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84" Namespace="calico-system" Pod="goldmane-768f4c5c69-xjqj5" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0" Jul 7 00:11:04.593212 containerd[1804]: 2025-07-07 00:11:04.565 [INFO][5236] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84" Namespace="calico-system" Pod="goldmane-768f4c5c69-xjqj5" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0" Jul 7 00:11:04.593212 containerd[1804]: 2025-07-07 00:11:04.568 [INFO][5236] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84" Namespace="calico-system" Pod="goldmane-768f4c5c69-xjqj5" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"c74f75d9-0067-422a-a233-ade5735b2645", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84", Pod:"goldmane-768f4c5c69-xjqj5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.19.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibe689a5acab", MAC:"5a:59:f3:cf:29:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:04.593212 containerd[1804]: 2025-07-07 00:11:04.587 [INFO][5236] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84" Namespace="calico-system" Pod="goldmane-768f4c5c69-xjqj5" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0" Jul 7 00:11:04.605439 containerd[1804]: time="2025-07-07T00:11:04.605208386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:11:04.605439 containerd[1804]: time="2025-07-07T00:11:04.605426477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:11:04.605439 containerd[1804]: time="2025-07-07T00:11:04.605434447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:04.605555 containerd[1804]: time="2025-07-07T00:11:04.605477294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:04.630422 systemd[1]: Started cri-containerd-7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84.scope - libcontainer container 7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84. Jul 7 00:11:04.654490 containerd[1804]: time="2025-07-07T00:11:04.654467628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xjqj5,Uid:c74f75d9-0067-422a-a233-ade5735b2645,Namespace:calico-system,Attempt:1,} returns sandbox id \"7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84\"" Jul 7 00:11:04.789821 systemd[1]: run-netns-cni\x2db266447a\x2d25d5\x2dc5fd\x2db5b2\x2d9be045926f52.mount: Deactivated successfully. Jul 7 00:11:04.790057 systemd[1]: run-netns-cni\x2d9ea76612\x2d7add\x2d2c73\x2d9191\x2d23f0a70100d2.mount: Deactivated successfully. 
Jul 7 00:11:05.299713 containerd[1804]: time="2025-07-07T00:11:05.299649987Z" level=info msg="StopPodSandbox for \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\"" Jul 7 00:11:05.299942 containerd[1804]: time="2025-07-07T00:11:05.299649994Z" level=info msg="StopPodSandbox for \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\"" Jul 7 00:11:05.356041 containerd[1804]: 2025-07-07 00:11:05.330 [INFO][5434] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Jul 7 00:11:05.356041 containerd[1804]: 2025-07-07 00:11:05.330 [INFO][5434] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" iface="eth0" netns="/var/run/netns/cni-1a1b71bd-d3fd-fbc8-d8ce-0b665e8cb031" Jul 7 00:11:05.356041 containerd[1804]: 2025-07-07 00:11:05.330 [INFO][5434] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" iface="eth0" netns="/var/run/netns/cni-1a1b71bd-d3fd-fbc8-d8ce-0b665e8cb031" Jul 7 00:11:05.356041 containerd[1804]: 2025-07-07 00:11:05.330 [INFO][5434] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" iface="eth0" netns="/var/run/netns/cni-1a1b71bd-d3fd-fbc8-d8ce-0b665e8cb031" Jul 7 00:11:05.356041 containerd[1804]: 2025-07-07 00:11:05.331 [INFO][5434] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Jul 7 00:11:05.356041 containerd[1804]: 2025-07-07 00:11:05.331 [INFO][5434] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Jul 7 00:11:05.356041 containerd[1804]: 2025-07-07 00:11:05.347 [INFO][5469] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" HandleID="k8s-pod-network.55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0" Jul 7 00:11:05.356041 containerd[1804]: 2025-07-07 00:11:05.347 [INFO][5469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:05.356041 containerd[1804]: 2025-07-07 00:11:05.347 [INFO][5469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:05.356041 containerd[1804]: 2025-07-07 00:11:05.352 [WARNING][5469] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" HandleID="k8s-pod-network.55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0" Jul 7 00:11:05.356041 containerd[1804]: 2025-07-07 00:11:05.352 [INFO][5469] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" HandleID="k8s-pod-network.55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0" Jul 7 00:11:05.356041 containerd[1804]: 2025-07-07 00:11:05.353 [INFO][5469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:05.356041 containerd[1804]: 2025-07-07 00:11:05.355 [INFO][5434] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Jul 7 00:11:05.356777 containerd[1804]: time="2025-07-07T00:11:05.356167831Z" level=info msg="TearDown network for sandbox \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\" successfully" Jul 7 00:11:05.356777 containerd[1804]: time="2025-07-07T00:11:05.356203902Z" level=info msg="StopPodSandbox for \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\" returns successfully" Jul 7 00:11:05.356777 containerd[1804]: time="2025-07-07T00:11:05.356766041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t4xs8,Uid:8ff5f712-5346-4f8c-8f1a-94e4806cd738,Namespace:kube-system,Attempt:1,}" Jul 7 00:11:05.358357 systemd[1]: run-netns-cni\x2d1a1b71bd\x2dd3fd\x2dfbc8\x2dd8ce\x2d0b665e8cb031.mount: Deactivated successfully. 
Jul 7 00:11:05.363117 containerd[1804]: 2025-07-07 00:11:05.330 [INFO][5433] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Jul 7 00:11:05.363117 containerd[1804]: 2025-07-07 00:11:05.330 [INFO][5433] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" iface="eth0" netns="/var/run/netns/cni-4f7af4b9-5f95-52ab-d4dc-2215ca519570" Jul 7 00:11:05.363117 containerd[1804]: 2025-07-07 00:11:05.330 [INFO][5433] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" iface="eth0" netns="/var/run/netns/cni-4f7af4b9-5f95-52ab-d4dc-2215ca519570" Jul 7 00:11:05.363117 containerd[1804]: 2025-07-07 00:11:05.331 [INFO][5433] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" iface="eth0" netns="/var/run/netns/cni-4f7af4b9-5f95-52ab-d4dc-2215ca519570" Jul 7 00:11:05.363117 containerd[1804]: 2025-07-07 00:11:05.331 [INFO][5433] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Jul 7 00:11:05.363117 containerd[1804]: 2025-07-07 00:11:05.331 [INFO][5433] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Jul 7 00:11:05.363117 containerd[1804]: 2025-07-07 00:11:05.347 [INFO][5468] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" HandleID="k8s-pod-network.1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0" Jul 7 00:11:05.363117 containerd[1804]: 2025-07-07 00:11:05.347 
[INFO][5468] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:05.363117 containerd[1804]: 2025-07-07 00:11:05.353 [INFO][5468] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:05.363117 containerd[1804]: 2025-07-07 00:11:05.359 [WARNING][5468] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" HandleID="k8s-pod-network.1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0" Jul 7 00:11:05.363117 containerd[1804]: 2025-07-07 00:11:05.359 [INFO][5468] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" HandleID="k8s-pod-network.1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0" Jul 7 00:11:05.363117 containerd[1804]: 2025-07-07 00:11:05.361 [INFO][5468] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:05.363117 containerd[1804]: 2025-07-07 00:11:05.361 [INFO][5433] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Jul 7 00:11:05.363554 containerd[1804]: time="2025-07-07T00:11:05.363185240Z" level=info msg="TearDown network for sandbox \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\" successfully" Jul 7 00:11:05.363554 containerd[1804]: time="2025-07-07T00:11:05.363203652Z" level=info msg="StopPodSandbox for \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\" returns successfully" Jul 7 00:11:05.363617 containerd[1804]: time="2025-07-07T00:11:05.363587993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57c6d9946c-d62bp,Uid:02f7ed80-c14c-4630-a553-ace08c648a2b,Namespace:calico-system,Attempt:1,}" Jul 7 00:11:05.365978 systemd[1]: run-netns-cni\x2d4f7af4b9\x2d5f95\x2d52ab\x2dd4dc\x2d2215ca519570.mount: Deactivated successfully. Jul 7 00:11:05.413014 systemd-networkd[1606]: calid7f9e0ed048: Link UP Jul 7 00:11:05.413155 systemd-networkd[1606]: calid7f9e0ed048: Gained carrier Jul 7 00:11:05.418545 containerd[1804]: 2025-07-07 00:11:05.379 [INFO][5499] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0 coredns-668d6bf9bc- kube-system 8ff5f712-5346-4f8c-8f1a-94e4806cd738 905 0 2025-07-07 00:10:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-a-fd0ee851f3 coredns-668d6bf9bc-t4xs8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid7f9e0ed048 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4xs8" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-" Jul 7 00:11:05.418545 
containerd[1804]: 2025-07-07 00:11:05.379 [INFO][5499] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4xs8" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0" Jul 7 00:11:05.418545 containerd[1804]: 2025-07-07 00:11:05.391 [INFO][5542] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc" HandleID="k8s-pod-network.7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0" Jul 7 00:11:05.418545 containerd[1804]: 2025-07-07 00:11:05.392 [INFO][5542] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc" HandleID="k8s-pod-network.7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139630), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-a-fd0ee851f3", "pod":"coredns-668d6bf9bc-t4xs8", "timestamp":"2025-07-07 00:11:05.391931155 +0000 UTC"}, Hostname:"ci-4081.3.4-a-fd0ee851f3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:11:05.418545 containerd[1804]: 2025-07-07 00:11:05.392 [INFO][5542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:05.418545 containerd[1804]: 2025-07-07 00:11:05.392 [INFO][5542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:11:05.418545 containerd[1804]: 2025-07-07 00:11:05.392 [INFO][5542] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-fd0ee851f3' Jul 7 00:11:05.418545 containerd[1804]: 2025-07-07 00:11:05.396 [INFO][5542] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:05.418545 containerd[1804]: 2025-07-07 00:11:05.399 [INFO][5542] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:05.418545 containerd[1804]: 2025-07-07 00:11:05.402 [INFO][5542] ipam/ipam.go 511: Trying affinity for 192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:05.418545 containerd[1804]: 2025-07-07 00:11:05.403 [INFO][5542] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:05.418545 containerd[1804]: 2025-07-07 00:11:05.405 [INFO][5542] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:05.418545 containerd[1804]: 2025-07-07 00:11:05.405 [INFO][5542] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:05.418545 containerd[1804]: 2025-07-07 00:11:05.406 [INFO][5542] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc Jul 7 00:11:05.418545 containerd[1804]: 2025-07-07 00:11:05.408 [INFO][5542] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:05.418545 containerd[1804]: 2025-07-07 00:11:05.411 [INFO][5542] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.19.196/26] block=192.168.19.192/26 handle="k8s-pod-network.7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:05.418545 containerd[1804]: 2025-07-07 00:11:05.411 [INFO][5542] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.196/26] handle="k8s-pod-network.7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:05.418545 containerd[1804]: 2025-07-07 00:11:05.411 [INFO][5542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:05.418545 containerd[1804]: 2025-07-07 00:11:05.411 [INFO][5542] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.196/26] IPv6=[] ContainerID="7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc" HandleID="k8s-pod-network.7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0" Jul 7 00:11:05.419113 containerd[1804]: 2025-07-07 00:11:05.412 [INFO][5499] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4xs8" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8ff5f712-5346-4f8c-8f1a-94e4806cd738", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"", Pod:"coredns-668d6bf9bc-t4xs8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7f9e0ed048", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:05.419113 containerd[1804]: 2025-07-07 00:11:05.412 [INFO][5499] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.196/32] ContainerID="7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4xs8" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0" Jul 7 00:11:05.419113 containerd[1804]: 2025-07-07 00:11:05.412 [INFO][5499] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7f9e0ed048 ContainerID="7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4xs8" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0" Jul 7 00:11:05.419113 containerd[1804]: 2025-07-07 00:11:05.413 [INFO][5499] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4xs8" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0" Jul 7 00:11:05.419113 containerd[1804]: 2025-07-07 00:11:05.413 [INFO][5499] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4xs8" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8ff5f712-5346-4f8c-8f1a-94e4806cd738", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc", Pod:"coredns-668d6bf9bc-t4xs8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7f9e0ed048", MAC:"fe:fa:70:e0:03:1d", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:05.419113 containerd[1804]: 2025-07-07 00:11:05.417 [INFO][5499] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4xs8" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0" Jul 7 00:11:05.426827 containerd[1804]: time="2025-07-07T00:11:05.426787316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:11:05.426827 containerd[1804]: time="2025-07-07T00:11:05.426816663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:11:05.426827 containerd[1804]: time="2025-07-07T00:11:05.426823821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:05.426964 containerd[1804]: time="2025-07-07T00:11:05.426862520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:05.456254 systemd[1]: Started cri-containerd-7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc.scope - libcontainer container 7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc. 
Jul 7 00:11:05.478286 containerd[1804]: time="2025-07-07T00:11:05.478237181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t4xs8,Uid:8ff5f712-5346-4f8c-8f1a-94e4806cd738,Namespace:kube-system,Attempt:1,} returns sandbox id \"7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc\"" Jul 7 00:11:05.479323 containerd[1804]: time="2025-07-07T00:11:05.479309662Z" level=info msg="CreateContainer within sandbox \"7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:11:05.483868 containerd[1804]: time="2025-07-07T00:11:05.483826341Z" level=info msg="CreateContainer within sandbox \"7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c3bce00ac8b075590ebd8efc2071f1cecff3d7a01acfc6caca381fd450ac220\"" Jul 7 00:11:05.484024 containerd[1804]: time="2025-07-07T00:11:05.484013026Z" level=info msg="StartContainer for \"7c3bce00ac8b075590ebd8efc2071f1cecff3d7a01acfc6caca381fd450ac220\"" Jul 7 00:11:05.500254 systemd[1]: Started cri-containerd-7c3bce00ac8b075590ebd8efc2071f1cecff3d7a01acfc6caca381fd450ac220.scope - libcontainer container 7c3bce00ac8b075590ebd8efc2071f1cecff3d7a01acfc6caca381fd450ac220. 
Jul 7 00:11:05.516458 containerd[1804]: time="2025-07-07T00:11:05.516426078Z" level=info msg="StartContainer for \"7c3bce00ac8b075590ebd8efc2071f1cecff3d7a01acfc6caca381fd450ac220\" returns successfully" Jul 7 00:11:05.517885 systemd-networkd[1606]: cali8801902be33: Link UP Jul 7 00:11:05.518140 systemd-networkd[1606]: cali8801902be33: Gained carrier Jul 7 00:11:05.525868 containerd[1804]: 2025-07-07 00:11:05.384 [INFO][5513] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0 calico-kube-controllers-57c6d9946c- calico-system 02f7ed80-c14c-4630-a553-ace08c648a2b 906 0 2025-07-07 00:10:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57c6d9946c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.4-a-fd0ee851f3 calico-kube-controllers-57c6d9946c-d62bp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8801902be33 [] [] }} ContainerID="f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f" Namespace="calico-system" Pod="calico-kube-controllers-57c6d9946c-d62bp" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-" Jul 7 00:11:05.525868 containerd[1804]: 2025-07-07 00:11:05.384 [INFO][5513] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f" Namespace="calico-system" Pod="calico-kube-controllers-57c6d9946c-d62bp" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0" Jul 7 00:11:05.525868 containerd[1804]: 2025-07-07 00:11:05.396 [INFO][5557] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f" HandleID="k8s-pod-network.f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0" Jul 7 00:11:05.525868 containerd[1804]: 2025-07-07 00:11:05.396 [INFO][5557] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f" HandleID="k8s-pod-network.f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f750), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-fd0ee851f3", "pod":"calico-kube-controllers-57c6d9946c-d62bp", "timestamp":"2025-07-07 00:11:05.396700469 +0000 UTC"}, Hostname:"ci-4081.3.4-a-fd0ee851f3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:11:05.525868 containerd[1804]: 2025-07-07 00:11:05.396 [INFO][5557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:05.525868 containerd[1804]: 2025-07-07 00:11:05.411 [INFO][5557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:11:05.525868 containerd[1804]: 2025-07-07 00:11:05.411 [INFO][5557] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-fd0ee851f3' Jul 7 00:11:05.525868 containerd[1804]: 2025-07-07 00:11:05.498 [INFO][5557] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:05.525868 containerd[1804]: 2025-07-07 00:11:05.501 [INFO][5557] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:05.525868 containerd[1804]: 2025-07-07 00:11:05.504 [INFO][5557] ipam/ipam.go 511: Trying affinity for 192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:05.525868 containerd[1804]: 2025-07-07 00:11:05.505 [INFO][5557] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:05.525868 containerd[1804]: 2025-07-07 00:11:05.507 [INFO][5557] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:05.525868 containerd[1804]: 2025-07-07 00:11:05.507 [INFO][5557] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:05.525868 containerd[1804]: 2025-07-07 00:11:05.508 [INFO][5557] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f Jul 7 00:11:05.525868 containerd[1804]: 2025-07-07 00:11:05.511 [INFO][5557] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:05.525868 containerd[1804]: 2025-07-07 00:11:05.515 [INFO][5557] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.19.197/26] block=192.168.19.192/26 handle="k8s-pod-network.f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:05.525868 containerd[1804]: 2025-07-07 00:11:05.515 [INFO][5557] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.197/26] handle="k8s-pod-network.f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:05.525868 containerd[1804]: 2025-07-07 00:11:05.515 [INFO][5557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:05.525868 containerd[1804]: 2025-07-07 00:11:05.515 [INFO][5557] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.197/26] IPv6=[] ContainerID="f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f" HandleID="k8s-pod-network.f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0" Jul 7 00:11:05.526409 containerd[1804]: 2025-07-07 00:11:05.516 [INFO][5513] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f" Namespace="calico-system" Pod="calico-kube-controllers-57c6d9946c-d62bp" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0", GenerateName:"calico-kube-controllers-57c6d9946c-", Namespace:"calico-system", SelfLink:"", UID:"02f7ed80-c14c-4630-a553-ace08c648a2b", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"57c6d9946c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"", Pod:"calico-kube-controllers-57c6d9946c-d62bp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8801902be33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:05.526409 containerd[1804]: 2025-07-07 00:11:05.516 [INFO][5513] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.197/32] ContainerID="f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f" Namespace="calico-system" Pod="calico-kube-controllers-57c6d9946c-d62bp" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0" Jul 7 00:11:05.526409 containerd[1804]: 2025-07-07 00:11:05.516 [INFO][5513] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8801902be33 ContainerID="f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f" Namespace="calico-system" Pod="calico-kube-controllers-57c6d9946c-d62bp" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0" Jul 7 00:11:05.526409 containerd[1804]: 2025-07-07 00:11:05.518 [INFO][5513] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f" Namespace="calico-system" 
Pod="calico-kube-controllers-57c6d9946c-d62bp" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0" Jul 7 00:11:05.526409 containerd[1804]: 2025-07-07 00:11:05.518 [INFO][5513] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f" Namespace="calico-system" Pod="calico-kube-controllers-57c6d9946c-d62bp" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0", GenerateName:"calico-kube-controllers-57c6d9946c-", Namespace:"calico-system", SelfLink:"", UID:"02f7ed80-c14c-4630-a553-ace08c648a2b", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57c6d9946c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f", Pod:"calico-kube-controllers-57c6d9946c-d62bp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8801902be33", MAC:"e2:da:25:75:d3:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:05.526409 containerd[1804]: 2025-07-07 00:11:05.524 [INFO][5513] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f" Namespace="calico-system" Pod="calico-kube-controllers-57c6d9946c-d62bp" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0" Jul 7 00:11:05.535265 containerd[1804]: time="2025-07-07T00:11:05.535021817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:11:05.535265 containerd[1804]: time="2025-07-07T00:11:05.535228183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:11:05.535265 containerd[1804]: time="2025-07-07T00:11:05.535236330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:05.535385 containerd[1804]: time="2025-07-07T00:11:05.535278677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:05.560430 systemd[1]: Started cri-containerd-f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f.scope - libcontainer container f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f. 
Jul 7 00:11:05.583293 containerd[1804]: time="2025-07-07T00:11:05.583267992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57c6d9946c-d62bp,Uid:02f7ed80-c14c-4630-a553-ace08c648a2b,Namespace:calico-system,Attempt:1,} returns sandbox id \"f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f\"" Jul 7 00:11:06.312880 containerd[1804]: time="2025-07-07T00:11:06.312794114Z" level=info msg="StopPodSandbox for \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\"" Jul 7 00:11:06.312880 containerd[1804]: time="2025-07-07T00:11:06.312844548Z" level=info msg="StopPodSandbox for \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\"" Jul 7 00:11:06.356552 containerd[1804]: 2025-07-07 00:11:06.340 [INFO][5747] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Jul 7 00:11:06.356552 containerd[1804]: 2025-07-07 00:11:06.340 [INFO][5747] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" iface="eth0" netns="/var/run/netns/cni-baaa4cd1-6051-a2b3-f461-0f460e9a45a8" Jul 7 00:11:06.356552 containerd[1804]: 2025-07-07 00:11:06.340 [INFO][5747] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" iface="eth0" netns="/var/run/netns/cni-baaa4cd1-6051-a2b3-f461-0f460e9a45a8" Jul 7 00:11:06.356552 containerd[1804]: 2025-07-07 00:11:06.341 [INFO][5747] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" iface="eth0" netns="/var/run/netns/cni-baaa4cd1-6051-a2b3-f461-0f460e9a45a8" Jul 7 00:11:06.356552 containerd[1804]: 2025-07-07 00:11:06.341 [INFO][5747] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Jul 7 00:11:06.356552 containerd[1804]: 2025-07-07 00:11:06.341 [INFO][5747] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Jul 7 00:11:06.356552 containerd[1804]: 2025-07-07 00:11:06.350 [INFO][5780] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" HandleID="k8s-pod-network.2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0" Jul 7 00:11:06.356552 containerd[1804]: 2025-07-07 00:11:06.350 [INFO][5780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:06.356552 containerd[1804]: 2025-07-07 00:11:06.351 [INFO][5780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:06.356552 containerd[1804]: 2025-07-07 00:11:06.354 [WARNING][5780] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" HandleID="k8s-pod-network.2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0" Jul 7 00:11:06.356552 containerd[1804]: 2025-07-07 00:11:06.354 [INFO][5780] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" HandleID="k8s-pod-network.2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0" Jul 7 00:11:06.356552 containerd[1804]: 2025-07-07 00:11:06.355 [INFO][5780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:06.356552 containerd[1804]: 2025-07-07 00:11:06.355 [INFO][5747] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Jul 7 00:11:06.356837 containerd[1804]: time="2025-07-07T00:11:06.356593669Z" level=info msg="TearDown network for sandbox \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\" successfully" Jul 7 00:11:06.356837 containerd[1804]: time="2025-07-07T00:11:06.356610659Z" level=info msg="StopPodSandbox for \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\" returns successfully" Jul 7 00:11:06.357046 containerd[1804]: time="2025-07-07T00:11:06.357029075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2vkrn,Uid:f702355d-417f-4caf-86a7-f40f67775a26,Namespace:kube-system,Attempt:1,}" Jul 7 00:11:06.358295 systemd[1]: run-netns-cni\x2dbaaa4cd1\x2d6051\x2da2b3\x2df461\x2d0f460e9a45a8.mount: Deactivated successfully. 
Jul 7 00:11:06.360643 containerd[1804]: 2025-07-07 00:11:06.340 [INFO][5748] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Jul 7 00:11:06.360643 containerd[1804]: 2025-07-07 00:11:06.341 [INFO][5748] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" iface="eth0" netns="/var/run/netns/cni-1b9ee560-f3fb-3734-55a1-3f0b5c264635" Jul 7 00:11:06.360643 containerd[1804]: 2025-07-07 00:11:06.341 [INFO][5748] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" iface="eth0" netns="/var/run/netns/cni-1b9ee560-f3fb-3734-55a1-3f0b5c264635" Jul 7 00:11:06.360643 containerd[1804]: 2025-07-07 00:11:06.341 [INFO][5748] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" iface="eth0" netns="/var/run/netns/cni-1b9ee560-f3fb-3734-55a1-3f0b5c264635" Jul 7 00:11:06.360643 containerd[1804]: 2025-07-07 00:11:06.341 [INFO][5748] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Jul 7 00:11:06.360643 containerd[1804]: 2025-07-07 00:11:06.341 [INFO][5748] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Jul 7 00:11:06.360643 containerd[1804]: 2025-07-07 00:11:06.351 [INFO][5782] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" HandleID="k8s-pod-network.9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0" Jul 7 00:11:06.360643 containerd[1804]: 2025-07-07 00:11:06.351 [INFO][5782] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:06.360643 containerd[1804]: 2025-07-07 00:11:06.355 [INFO][5782] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:06.360643 containerd[1804]: 2025-07-07 00:11:06.358 [WARNING][5782] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" HandleID="k8s-pod-network.9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0" Jul 7 00:11:06.360643 containerd[1804]: 2025-07-07 00:11:06.358 [INFO][5782] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" HandleID="k8s-pod-network.9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0" Jul 7 00:11:06.360643 containerd[1804]: 2025-07-07 00:11:06.358 [INFO][5782] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:06.360643 containerd[1804]: 2025-07-07 00:11:06.359 [INFO][5748] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Jul 7 00:11:06.360978 containerd[1804]: time="2025-07-07T00:11:06.360783214Z" level=info msg="TearDown network for sandbox \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\" successfully" Jul 7 00:11:06.360978 containerd[1804]: time="2025-07-07T00:11:06.360795223Z" level=info msg="StopPodSandbox for \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\" returns successfully" Jul 7 00:11:06.361099 containerd[1804]: time="2025-07-07T00:11:06.361089253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvdv8,Uid:9c39c50a-eb2f-499c-b38e-71339392cd68,Namespace:calico-system,Attempt:1,}" Jul 7 00:11:06.362271 systemd[1]: run-netns-cni\x2d1b9ee560\x2df3fb\x2d3734\x2d55a1\x2d3f0b5c264635.mount: Deactivated successfully. Jul 7 00:11:06.411282 systemd-networkd[1606]: cali3cc0a1cd1f5: Gained IPv6LL Jul 7 00:11:06.411471 systemd-networkd[1606]: calibe689a5acab: Gained IPv6LL Jul 7 00:11:06.415382 systemd-networkd[1606]: califb9be6cb51a: Link UP Jul 7 00:11:06.415767 systemd-networkd[1606]: califb9be6cb51a: Gained carrier Jul 7 00:11:06.429788 containerd[1804]: 2025-07-07 00:11:06.380 [INFO][5811] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0 coredns-668d6bf9bc- kube-system f702355d-417f-4caf-86a7-f40f67775a26 921 0 2025-07-07 00:10:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-a-fd0ee851f3 coredns-668d6bf9bc-2vkrn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califb9be6cb51a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-2vkrn" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-" Jul 7 00:11:06.429788 containerd[1804]: 2025-07-07 00:11:06.380 [INFO][5811] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vkrn" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0" Jul 7 00:11:06.429788 containerd[1804]: 2025-07-07 00:11:06.392 [INFO][5858] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db" HandleID="k8s-pod-network.7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0" Jul 7 00:11:06.429788 containerd[1804]: 2025-07-07 00:11:06.392 [INFO][5858] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db" HandleID="k8s-pod-network.7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139500), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-a-fd0ee851f3", "pod":"coredns-668d6bf9bc-2vkrn", "timestamp":"2025-07-07 00:11:06.392745883 +0000 UTC"}, Hostname:"ci-4081.3.4-a-fd0ee851f3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:11:06.429788 containerd[1804]: 2025-07-07 00:11:06.392 [INFO][5858] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:06.429788 containerd[1804]: 2025-07-07 00:11:06.392 [INFO][5858] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:11:06.429788 containerd[1804]: 2025-07-07 00:11:06.392 [INFO][5858] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-fd0ee851f3' Jul 7 00:11:06.429788 containerd[1804]: 2025-07-07 00:11:06.397 [INFO][5858] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:06.429788 containerd[1804]: 2025-07-07 00:11:06.400 [INFO][5858] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:06.429788 containerd[1804]: 2025-07-07 00:11:06.403 [INFO][5858] ipam/ipam.go 511: Trying affinity for 192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:06.429788 containerd[1804]: 2025-07-07 00:11:06.405 [INFO][5858] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:06.429788 containerd[1804]: 2025-07-07 00:11:06.406 [INFO][5858] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:06.429788 containerd[1804]: 2025-07-07 00:11:06.406 [INFO][5858] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:06.429788 containerd[1804]: 2025-07-07 00:11:06.407 [INFO][5858] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db Jul 7 00:11:06.429788 containerd[1804]: 2025-07-07 00:11:06.410 [INFO][5858] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:06.429788 containerd[1804]: 2025-07-07 00:11:06.413 [INFO][5858] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.19.198/26] block=192.168.19.192/26 handle="k8s-pod-network.7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:06.429788 containerd[1804]: 2025-07-07 00:11:06.413 [INFO][5858] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.198/26] handle="k8s-pod-network.7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:06.429788 containerd[1804]: 2025-07-07 00:11:06.413 [INFO][5858] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:06.429788 containerd[1804]: 2025-07-07 00:11:06.413 [INFO][5858] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.198/26] IPv6=[] ContainerID="7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db" HandleID="k8s-pod-network.7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0" Jul 7 00:11:06.430496 containerd[1804]: 2025-07-07 00:11:06.414 [INFO][5811] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vkrn" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f702355d-417f-4caf-86a7-f40f67775a26", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"", Pod:"coredns-668d6bf9bc-2vkrn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb9be6cb51a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:06.430496 containerd[1804]: 2025-07-07 00:11:06.414 [INFO][5811] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.198/32] ContainerID="7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vkrn" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0" Jul 7 00:11:06.430496 containerd[1804]: 2025-07-07 00:11:06.414 [INFO][5811] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb9be6cb51a ContainerID="7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vkrn" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0" Jul 7 00:11:06.430496 containerd[1804]: 2025-07-07 00:11:06.415 [INFO][5811] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vkrn" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0" Jul 7 00:11:06.430496 containerd[1804]: 2025-07-07 00:11:06.416 [INFO][5811] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vkrn" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f702355d-417f-4caf-86a7-f40f67775a26", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db", Pod:"coredns-668d6bf9bc-2vkrn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb9be6cb51a", MAC:"a2:bd:14:da:63:42", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:06.430496 containerd[1804]: 2025-07-07 00:11:06.428 [INFO][5811] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vkrn" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0" Jul 7 00:11:06.439118 containerd[1804]: time="2025-07-07T00:11:06.438881574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:11:06.439118 containerd[1804]: time="2025-07-07T00:11:06.439093906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:11:06.439118 containerd[1804]: time="2025-07-07T00:11:06.439101879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:06.439335 containerd[1804]: time="2025-07-07T00:11:06.439217811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:06.439622 kubelet[3067]: I0707 00:11:06.439589 3067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-t4xs8" podStartSLOduration=33.439576447 podStartE2EDuration="33.439576447s" podCreationTimestamp="2025-07-07 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:11:06.439411276 +0000 UTC m=+38.237447520" watchObservedRunningTime="2025-07-07 00:11:06.439576447 +0000 UTC m=+38.237612687" Jul 7 00:11:06.462301 systemd[1]: Started cri-containerd-7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db.scope - libcontainer container 7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db. Jul 7 00:11:06.484359 containerd[1804]: time="2025-07-07T00:11:06.484339023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2vkrn,Uid:f702355d-417f-4caf-86a7-f40f67775a26,Namespace:kube-system,Attempt:1,} returns sandbox id \"7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db\"" Jul 7 00:11:06.485794 containerd[1804]: time="2025-07-07T00:11:06.485778981Z" level=info msg="CreateContainer within sandbox \"7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:11:06.490290 containerd[1804]: time="2025-07-07T00:11:06.490271390Z" level=info msg="CreateContainer within sandbox \"7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3839e6f01c1216c692e80ac4fa6bedf99e87f5ebc357c0c80ddd075fb5b61ff1\"" Jul 7 00:11:06.490551 containerd[1804]: time="2025-07-07T00:11:06.490529767Z" level=info msg="StartContainer for \"3839e6f01c1216c692e80ac4fa6bedf99e87f5ebc357c0c80ddd075fb5b61ff1\"" Jul 7 00:11:06.512548 systemd-networkd[1606]: calif225666af39: 
Link UP Jul 7 00:11:06.512654 systemd-networkd[1606]: calif225666af39: Gained carrier Jul 7 00:11:06.513284 systemd[1]: Started cri-containerd-3839e6f01c1216c692e80ac4fa6bedf99e87f5ebc357c0c80ddd075fb5b61ff1.scope - libcontainer container 3839e6f01c1216c692e80ac4fa6bedf99e87f5ebc357c0c80ddd075fb5b61ff1. Jul 7 00:11:06.518724 containerd[1804]: 2025-07-07 00:11:06.381 [INFO][5821] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0 csi-node-driver- calico-system 9c39c50a-eb2f-499c-b38e-71339392cd68 922 0 2025-07-07 00:10:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.4-a-fd0ee851f3 csi-node-driver-mvdv8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif225666af39 [] [] }} ContainerID="eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717" Namespace="calico-system" Pod="csi-node-driver-mvdv8" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-" Jul 7 00:11:06.518724 containerd[1804]: 2025-07-07 00:11:06.381 [INFO][5821] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717" Namespace="calico-system" Pod="csi-node-driver-mvdv8" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0" Jul 7 00:11:06.518724 containerd[1804]: 2025-07-07 00:11:06.393 [INFO][5862] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717" HandleID="k8s-pod-network.eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717" 
Workload="ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0" Jul 7 00:11:06.518724 containerd[1804]: 2025-07-07 00:11:06.393 [INFO][5862] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717" HandleID="k8s-pod-network.eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026f7c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-a-fd0ee851f3", "pod":"csi-node-driver-mvdv8", "timestamp":"2025-07-07 00:11:06.393057248 +0000 UTC"}, Hostname:"ci-4081.3.4-a-fd0ee851f3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:11:06.518724 containerd[1804]: 2025-07-07 00:11:06.393 [INFO][5862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:06.518724 containerd[1804]: 2025-07-07 00:11:06.413 [INFO][5862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:11:06.518724 containerd[1804]: 2025-07-07 00:11:06.413 [INFO][5862] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-fd0ee851f3' Jul 7 00:11:06.518724 containerd[1804]: 2025-07-07 00:11:06.498 [INFO][5862] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:06.518724 containerd[1804]: 2025-07-07 00:11:06.501 [INFO][5862] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:06.518724 containerd[1804]: 2025-07-07 00:11:06.503 [INFO][5862] ipam/ipam.go 511: Trying affinity for 192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:06.518724 containerd[1804]: 2025-07-07 00:11:06.504 [INFO][5862] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:06.518724 containerd[1804]: 2025-07-07 00:11:06.505 [INFO][5862] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:06.518724 containerd[1804]: 2025-07-07 00:11:06.505 [INFO][5862] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:06.518724 containerd[1804]: 2025-07-07 00:11:06.506 [INFO][5862] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717 Jul 7 00:11:06.518724 containerd[1804]: 2025-07-07 00:11:06.507 [INFO][5862] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:06.518724 containerd[1804]: 2025-07-07 00:11:06.510 [INFO][5862] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.19.199/26] block=192.168.19.192/26 handle="k8s-pod-network.eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:06.518724 containerd[1804]: 2025-07-07 00:11:06.510 [INFO][5862] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.199/26] handle="k8s-pod-network.eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:06.518724 containerd[1804]: 2025-07-07 00:11:06.510 [INFO][5862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:06.518724 containerd[1804]: 2025-07-07 00:11:06.510 [INFO][5862] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.199/26] IPv6=[] ContainerID="eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717" HandleID="k8s-pod-network.eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0" Jul 7 00:11:06.519343 containerd[1804]: 2025-07-07 00:11:06.511 [INFO][5821] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717" Namespace="calico-system" Pod="csi-node-driver-mvdv8" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c39c50a-eb2f-499c-b38e-71339392cd68", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"", Pod:"csi-node-driver-mvdv8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif225666af39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:06.519343 containerd[1804]: 2025-07-07 00:11:06.511 [INFO][5821] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.199/32] ContainerID="eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717" Namespace="calico-system" Pod="csi-node-driver-mvdv8" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0" Jul 7 00:11:06.519343 containerd[1804]: 2025-07-07 00:11:06.511 [INFO][5821] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif225666af39 ContainerID="eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717" Namespace="calico-system" Pod="csi-node-driver-mvdv8" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0" Jul 7 00:11:06.519343 containerd[1804]: 2025-07-07 00:11:06.512 [INFO][5821] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717" Namespace="calico-system" Pod="csi-node-driver-mvdv8" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0" Jul 7 00:11:06.519343 containerd[1804]: 2025-07-07 00:11:06.512 
[INFO][5821] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717" Namespace="calico-system" Pod="csi-node-driver-mvdv8" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c39c50a-eb2f-499c-b38e-71339392cd68", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717", Pod:"csi-node-driver-mvdv8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif225666af39", MAC:"f6:27:1a:2a:27:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:06.519343 containerd[1804]: 2025-07-07 00:11:06.517 [INFO][5821] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717" Namespace="calico-system" Pod="csi-node-driver-mvdv8" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0" Jul 7 00:11:06.525537 containerd[1804]: time="2025-07-07T00:11:06.525516304Z" level=info msg="StartContainer for \"3839e6f01c1216c692e80ac4fa6bedf99e87f5ebc357c0c80ddd075fb5b61ff1\" returns successfully" Jul 7 00:11:06.527497 containerd[1804]: time="2025-07-07T00:11:06.527453428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:11:06.527497 containerd[1804]: time="2025-07-07T00:11:06.527486927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:11:06.527598 containerd[1804]: time="2025-07-07T00:11:06.527495345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:06.527598 containerd[1804]: time="2025-07-07T00:11:06.527550109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:06.543365 systemd[1]: Started cri-containerd-eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717.scope - libcontainer container eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717. 
Jul 7 00:11:06.553503 containerd[1804]: time="2025-07-07T00:11:06.553481299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvdv8,Uid:9c39c50a-eb2f-499c-b38e-71339392cd68,Namespace:calico-system,Attempt:1,} returns sandbox id \"eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717\"" Jul 7 00:11:07.115204 systemd-networkd[1606]: calid7f9e0ed048: Gained IPv6LL Jul 7 00:11:07.178279 systemd-networkd[1606]: cali8801902be33: Gained IPv6LL Jul 7 00:11:07.442737 kubelet[3067]: I0707 00:11:07.442659 3067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2vkrn" podStartSLOduration=34.442643162 podStartE2EDuration="34.442643162s" podCreationTimestamp="2025-07-07 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:11:07.442488328 +0000 UTC m=+39.240524571" watchObservedRunningTime="2025-07-07 00:11:07.442643162 +0000 UTC m=+39.240679414" Jul 7 00:11:07.443529 containerd[1804]: time="2025-07-07T00:11:07.443510235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:07.443673 containerd[1804]: time="2025-07-07T00:11:07.443613036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 00:11:07.444013 containerd[1804]: time="2025-07-07T00:11:07.443999043Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:07.445226 containerd[1804]: time="2025-07-07T00:11:07.445210374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Jul 7 00:11:07.445672 containerd[1804]: time="2025-07-07T00:11:07.445658510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.955901779s" Jul 7 00:11:07.445702 containerd[1804]: time="2025-07-07T00:11:07.445676083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:11:07.446193 containerd[1804]: time="2025-07-07T00:11:07.446182349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 00:11:07.446719 containerd[1804]: time="2025-07-07T00:11:07.446706169Z" level=info msg="CreateContainer within sandbox \"70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:11:07.467829 containerd[1804]: time="2025-07-07T00:11:07.467791753Z" level=info msg="CreateContainer within sandbox \"70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"24de683ddf20673668a2d043d6e8edfe86dca549e42c0472577913efc10a0649\"" Jul 7 00:11:07.468074 containerd[1804]: time="2025-07-07T00:11:07.468062654Z" level=info msg="StartContainer for \"24de683ddf20673668a2d043d6e8edfe86dca549e42c0472577913efc10a0649\"" Jul 7 00:11:07.490312 systemd[1]: Started cri-containerd-24de683ddf20673668a2d043d6e8edfe86dca549e42c0472577913efc10a0649.scope - libcontainer container 24de683ddf20673668a2d043d6e8edfe86dca549e42c0472577913efc10a0649. 
Jul 7 00:11:07.514544 containerd[1804]: time="2025-07-07T00:11:07.514520860Z" level=info msg="StartContainer for \"24de683ddf20673668a2d043d6e8edfe86dca549e42c0472577913efc10a0649\" returns successfully" Jul 7 00:11:07.883769 systemd-networkd[1606]: calif225666af39: Gained IPv6LL Jul 7 00:11:08.301844 containerd[1804]: time="2025-07-07T00:11:08.301751127Z" level=info msg="StopPodSandbox for \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\"" Jul 7 00:11:08.389170 containerd[1804]: 2025-07-07 00:11:08.367 [INFO][6119] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Jul 7 00:11:08.389170 containerd[1804]: 2025-07-07 00:11:08.368 [INFO][6119] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" iface="eth0" netns="/var/run/netns/cni-e1b37d3c-728b-7657-9380-23f121fe8664" Jul 7 00:11:08.389170 containerd[1804]: 2025-07-07 00:11:08.368 [INFO][6119] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" iface="eth0" netns="/var/run/netns/cni-e1b37d3c-728b-7657-9380-23f121fe8664" Jul 7 00:11:08.389170 containerd[1804]: 2025-07-07 00:11:08.368 [INFO][6119] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" iface="eth0" netns="/var/run/netns/cni-e1b37d3c-728b-7657-9380-23f121fe8664" Jul 7 00:11:08.389170 containerd[1804]: 2025-07-07 00:11:08.368 [INFO][6119] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Jul 7 00:11:08.389170 containerd[1804]: 2025-07-07 00:11:08.368 [INFO][6119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Jul 7 00:11:08.389170 containerd[1804]: 2025-07-07 00:11:08.381 [INFO][6136] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" HandleID="k8s-pod-network.7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0" Jul 7 00:11:08.389170 containerd[1804]: 2025-07-07 00:11:08.381 [INFO][6136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:08.389170 containerd[1804]: 2025-07-07 00:11:08.381 [INFO][6136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:08.389170 containerd[1804]: 2025-07-07 00:11:08.386 [WARNING][6136] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" HandleID="k8s-pod-network.7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0" Jul 7 00:11:08.389170 containerd[1804]: 2025-07-07 00:11:08.386 [INFO][6136] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" HandleID="k8s-pod-network.7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0" Jul 7 00:11:08.389170 containerd[1804]: 2025-07-07 00:11:08.387 [INFO][6136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:08.389170 containerd[1804]: 2025-07-07 00:11:08.388 [INFO][6119] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Jul 7 00:11:08.389607 containerd[1804]: time="2025-07-07T00:11:08.389242075Z" level=info msg="TearDown network for sandbox \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\" successfully" Jul 7 00:11:08.389607 containerd[1804]: time="2025-07-07T00:11:08.389259034Z" level=info msg="StopPodSandbox for \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\" returns successfully" Jul 7 00:11:08.389607 containerd[1804]: time="2025-07-07T00:11:08.389589986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-898df8c5d-x44jj,Uid:a5989df0-3d41-4e07-823a-56249763eb4e,Namespace:calico-apiserver,Attempt:1,}" Jul 7 00:11:08.390911 systemd[1]: run-netns-cni\x2de1b37d3c\x2d728b\x2d7657\x2d9380\x2d23f121fe8664.mount: Deactivated successfully. 
Jul 7 00:11:08.396194 systemd-networkd[1606]: califb9be6cb51a: Gained IPv6LL Jul 7 00:11:08.441616 systemd-networkd[1606]: calid65c909bea3: Link UP Jul 7 00:11:08.441786 systemd-networkd[1606]: calid65c909bea3: Gained carrier Jul 7 00:11:08.446430 kubelet[3067]: I0707 00:11:08.446381 3067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-898df8c5d-mjxn2" podStartSLOduration=25.489860924 podStartE2EDuration="28.446361775s" podCreationTimestamp="2025-07-07 00:10:40 +0000 UTC" firstStartedPulling="2025-07-07 00:11:04.489609511 +0000 UTC m=+36.287645755" lastFinishedPulling="2025-07-07 00:11:07.446110357 +0000 UTC m=+39.244146606" observedRunningTime="2025-07-07 00:11:08.442512953 +0000 UTC m=+40.240549196" watchObservedRunningTime="2025-07-07 00:11:08.446361775 +0000 UTC m=+40.244398015" Jul 7 00:11:08.447505 containerd[1804]: 2025-07-07 00:11:08.409 [INFO][6158] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0 calico-apiserver-898df8c5d- calico-apiserver a5989df0-3d41-4e07-823a-56249763eb4e 952 0 2025-07-07 00:10:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:898df8c5d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-a-fd0ee851f3 calico-apiserver-898df8c5d-x44jj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid65c909bea3 [] [] }} ContainerID="159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e" Namespace="calico-apiserver" Pod="calico-apiserver-898df8c5d-x44jj" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-" Jul 7 00:11:08.447505 containerd[1804]: 2025-07-07 00:11:08.409 [INFO][6158] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e" Namespace="calico-apiserver" Pod="calico-apiserver-898df8c5d-x44jj" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0" Jul 7 00:11:08.447505 containerd[1804]: 2025-07-07 00:11:08.421 [INFO][6177] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e" HandleID="k8s-pod-network.159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0" Jul 7 00:11:08.447505 containerd[1804]: 2025-07-07 00:11:08.421 [INFO][6177] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e" HandleID="k8s-pod-network.159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-a-fd0ee851f3", "pod":"calico-apiserver-898df8c5d-x44jj", "timestamp":"2025-07-07 00:11:08.421900413 +0000 UTC"}, Hostname:"ci-4081.3.4-a-fd0ee851f3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:11:08.447505 containerd[1804]: 2025-07-07 00:11:08.422 [INFO][6177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:08.447505 containerd[1804]: 2025-07-07 00:11:08.422 [INFO][6177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:11:08.447505 containerd[1804]: 2025-07-07 00:11:08.422 [INFO][6177] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-a-fd0ee851f3' Jul 7 00:11:08.447505 containerd[1804]: 2025-07-07 00:11:08.426 [INFO][6177] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:08.447505 containerd[1804]: 2025-07-07 00:11:08.429 [INFO][6177] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:08.447505 containerd[1804]: 2025-07-07 00:11:08.431 [INFO][6177] ipam/ipam.go 511: Trying affinity for 192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:08.447505 containerd[1804]: 2025-07-07 00:11:08.432 [INFO][6177] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:08.447505 containerd[1804]: 2025-07-07 00:11:08.433 [INFO][6177] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:08.447505 containerd[1804]: 2025-07-07 00:11:08.433 [INFO][6177] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:08.447505 containerd[1804]: 2025-07-07 00:11:08.434 [INFO][6177] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e Jul 7 00:11:08.447505 containerd[1804]: 2025-07-07 00:11:08.436 [INFO][6177] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:08.447505 containerd[1804]: 2025-07-07 00:11:08.439 [INFO][6177] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.19.200/26] block=192.168.19.192/26 handle="k8s-pod-network.159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:08.447505 containerd[1804]: 2025-07-07 00:11:08.439 [INFO][6177] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.200/26] handle="k8s-pod-network.159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e" host="ci-4081.3.4-a-fd0ee851f3" Jul 7 00:11:08.447505 containerd[1804]: 2025-07-07 00:11:08.439 [INFO][6177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:08.447505 containerd[1804]: 2025-07-07 00:11:08.439 [INFO][6177] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.200/26] IPv6=[] ContainerID="159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e" HandleID="k8s-pod-network.159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0" Jul 7 00:11:08.448271 containerd[1804]: 2025-07-07 00:11:08.440 [INFO][6158] cni-plugin/k8s.go 418: Populated endpoint ContainerID="159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e" Namespace="calico-apiserver" Pod="calico-apiserver-898df8c5d-x44jj" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0", GenerateName:"calico-apiserver-898df8c5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5989df0-3d41-4e07-823a-56249763eb4e", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"898df8c5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"", Pod:"calico-apiserver-898df8c5d-x44jj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid65c909bea3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:08.448271 containerd[1804]: 2025-07-07 00:11:08.440 [INFO][6158] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.200/32] ContainerID="159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e" Namespace="calico-apiserver" Pod="calico-apiserver-898df8c5d-x44jj" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0" Jul 7 00:11:08.448271 containerd[1804]: 2025-07-07 00:11:08.440 [INFO][6158] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid65c909bea3 ContainerID="159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e" Namespace="calico-apiserver" Pod="calico-apiserver-898df8c5d-x44jj" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0" Jul 7 00:11:08.448271 containerd[1804]: 2025-07-07 00:11:08.441 [INFO][6158] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e" Namespace="calico-apiserver" Pod="calico-apiserver-898df8c5d-x44jj" 
WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0" Jul 7 00:11:08.448271 containerd[1804]: 2025-07-07 00:11:08.442 [INFO][6158] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e" Namespace="calico-apiserver" Pod="calico-apiserver-898df8c5d-x44jj" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0", GenerateName:"calico-apiserver-898df8c5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5989df0-3d41-4e07-823a-56249763eb4e", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"898df8c5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e", Pod:"calico-apiserver-898df8c5d-x44jj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid65c909bea3", MAC:"de:df:6d:c8:2b:29", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:08.448271 containerd[1804]: 2025-07-07 00:11:08.446 [INFO][6158] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e" Namespace="calico-apiserver" Pod="calico-apiserver-898df8c5d-x44jj" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0" Jul 7 00:11:08.456268 containerd[1804]: time="2025-07-07T00:11:08.456042661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:11:08.456268 containerd[1804]: time="2025-07-07T00:11:08.456259076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:11:08.456268 containerd[1804]: time="2025-07-07T00:11:08.456267170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:08.456380 containerd[1804]: time="2025-07-07T00:11:08.456306615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:11:08.479382 systemd[1]: Started cri-containerd-159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e.scope - libcontainer container 159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e. 
Jul 7 00:11:08.505229 containerd[1804]: time="2025-07-07T00:11:08.505206053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-898df8c5d-x44jj,Uid:a5989df0-3d41-4e07-823a-56249763eb4e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e\"" Jul 7 00:11:08.506494 containerd[1804]: time="2025-07-07T00:11:08.506478182Z" level=info msg="CreateContainer within sandbox \"159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:11:08.510692 containerd[1804]: time="2025-07-07T00:11:08.510678710Z" level=info msg="CreateContainer within sandbox \"159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cc3e54cb96550e912cfd4e732b99e6abd9943ab1bb74941c8be30d7bc3462037\"" Jul 7 00:11:08.510891 containerd[1804]: time="2025-07-07T00:11:08.510880214Z" level=info msg="StartContainer for \"cc3e54cb96550e912cfd4e732b99e6abd9943ab1bb74941c8be30d7bc3462037\"" Jul 7 00:11:08.532559 systemd[1]: Started cri-containerd-cc3e54cb96550e912cfd4e732b99e6abd9943ab1bb74941c8be30d7bc3462037.scope - libcontainer container cc3e54cb96550e912cfd4e732b99e6abd9943ab1bb74941c8be30d7bc3462037. 
Jul 7 00:11:08.621457 containerd[1804]: time="2025-07-07T00:11:08.621346941Z" level=info msg="StartContainer for \"cc3e54cb96550e912cfd4e732b99e6abd9943ab1bb74941c8be30d7bc3462037\" returns successfully" Jul 7 00:11:09.453728 kubelet[3067]: I0707 00:11:09.453644 3067 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:11:09.479427 kubelet[3067]: I0707 00:11:09.479385 3067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-898df8c5d-x44jj" podStartSLOduration=29.479370013 podStartE2EDuration="29.479370013s" podCreationTimestamp="2025-07-07 00:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:11:09.479311689 +0000 UTC m=+41.277347932" watchObservedRunningTime="2025-07-07 00:11:09.479370013 +0000 UTC m=+41.277406256" Jul 7 00:11:10.250213 systemd-networkd[1606]: calid65c909bea3: Gained IPv6LL Jul 7 00:11:10.280085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount569445408.mount: Deactivated successfully. 
Jul 7 00:11:10.454840 kubelet[3067]: I0707 00:11:10.454824 3067 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:11:10.492810 containerd[1804]: time="2025-07-07T00:11:10.492786647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:10.493037 containerd[1804]: time="2025-07-07T00:11:10.493013939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 00:11:10.493295 containerd[1804]: time="2025-07-07T00:11:10.493284067Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:10.494850 containerd[1804]: time="2025-07-07T00:11:10.494836112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:10.495185 containerd[1804]: time="2025-07-07T00:11:10.495168645Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.048971343s" Jul 7 00:11:10.495231 containerd[1804]: time="2025-07-07T00:11:10.495186690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 00:11:10.495693 containerd[1804]: time="2025-07-07T00:11:10.495679409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 00:11:10.496195 containerd[1804]: 
time="2025-07-07T00:11:10.496183020Z" level=info msg="CreateContainer within sandbox \"7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 00:11:10.500488 containerd[1804]: time="2025-07-07T00:11:10.500415527Z" level=info msg="CreateContainer within sandbox \"7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a494441c472671ecac582813710561308f8fe73d5fe0485717266be2dbca9241\"" Jul 7 00:11:10.500727 containerd[1804]: time="2025-07-07T00:11:10.500710340Z" level=info msg="StartContainer for \"a494441c472671ecac582813710561308f8fe73d5fe0485717266be2dbca9241\"" Jul 7 00:11:10.523472 systemd[1]: Started cri-containerd-a494441c472671ecac582813710561308f8fe73d5fe0485717266be2dbca9241.scope - libcontainer container a494441c472671ecac582813710561308f8fe73d5fe0485717266be2dbca9241. Jul 7 00:11:10.546377 containerd[1804]: time="2025-07-07T00:11:10.546355564Z" level=info msg="StartContainer for \"a494441c472671ecac582813710561308f8fe73d5fe0485717266be2dbca9241\" returns successfully" Jul 7 00:11:11.464383 kubelet[3067]: I0707 00:11:11.464340 3067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-xjqj5" podStartSLOduration=23.623721677 podStartE2EDuration="29.464324121s" podCreationTimestamp="2025-07-07 00:10:42 +0000 UTC" firstStartedPulling="2025-07-07 00:11:04.655005003 +0000 UTC m=+36.453041246" lastFinishedPulling="2025-07-07 00:11:10.495607447 +0000 UTC m=+42.293643690" observedRunningTime="2025-07-07 00:11:11.46408977 +0000 UTC m=+43.262126017" watchObservedRunningTime="2025-07-07 00:11:11.464324121 +0000 UTC m=+43.262360363" Jul 7 00:11:12.127510 kubelet[3067]: I0707 00:11:12.127394 3067 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:11:13.426249 containerd[1804]: time="2025-07-07T00:11:13.426188532Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:13.426473 containerd[1804]: time="2025-07-07T00:11:13.426315833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 00:11:13.426680 containerd[1804]: time="2025-07-07T00:11:13.426667614Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:13.427750 containerd[1804]: time="2025-07-07T00:11:13.427734282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:13.428191 containerd[1804]: time="2025-07-07T00:11:13.428175219Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.932477381s" Jul 7 00:11:13.428234 containerd[1804]: time="2025-07-07T00:11:13.428196537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 00:11:13.428713 containerd[1804]: time="2025-07-07T00:11:13.428702199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 00:11:13.431654 containerd[1804]: time="2025-07-07T00:11:13.431611366Z" level=info msg="CreateContainer within sandbox \"f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 00:11:13.436344 containerd[1804]: time="2025-07-07T00:11:13.436320125Z" level=info msg="CreateContainer within sandbox \"f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4528d3d7153076cb656860bff637551f196a571dff568b3c7d56564f36274c21\"" Jul 7 00:11:13.436628 containerd[1804]: time="2025-07-07T00:11:13.436614969Z" level=info msg="StartContainer for \"4528d3d7153076cb656860bff637551f196a571dff568b3c7d56564f36274c21\"" Jul 7 00:11:13.461256 systemd[1]: Started cri-containerd-4528d3d7153076cb656860bff637551f196a571dff568b3c7d56564f36274c21.scope - libcontainer container 4528d3d7153076cb656860bff637551f196a571dff568b3c7d56564f36274c21. Jul 7 00:11:13.484855 containerd[1804]: time="2025-07-07T00:11:13.484832632Z" level=info msg="StartContainer for \"4528d3d7153076cb656860bff637551f196a571dff568b3c7d56564f36274c21\" returns successfully" Jul 7 00:11:14.492265 kubelet[3067]: I0707 00:11:14.492062 3067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-57c6d9946c-d62bp" podStartSLOduration=23.647218721 podStartE2EDuration="31.492011914s" podCreationTimestamp="2025-07-07 00:10:43 +0000 UTC" firstStartedPulling="2025-07-07 00:11:05.583849655 +0000 UTC m=+37.381885898" lastFinishedPulling="2025-07-07 00:11:13.428642849 +0000 UTC m=+45.226679091" observedRunningTime="2025-07-07 00:11:14.4911645 +0000 UTC m=+46.289200857" watchObservedRunningTime="2025-07-07 00:11:14.492011914 +0000 UTC m=+46.290048214" Jul 7 00:11:15.197173 containerd[1804]: time="2025-07-07T00:11:15.197150362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:15.207551 containerd[1804]: time="2025-07-07T00:11:15.197300187Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 00:11:15.207551 containerd[1804]: time="2025-07-07T00:11:15.197762814Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:15.207551 containerd[1804]: time="2025-07-07T00:11:15.198682792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:15.207551 containerd[1804]: time="2025-07-07T00:11:15.199098242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.770380597s" Jul 7 00:11:15.207551 containerd[1804]: time="2025-07-07T00:11:15.199115390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 00:11:15.207551 containerd[1804]: time="2025-07-07T00:11:15.200032722Z" level=info msg="CreateContainer within sandbox \"eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 00:11:15.207551 containerd[1804]: time="2025-07-07T00:11:15.205029623Z" level=info msg="CreateContainer within sandbox \"eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c244644719cbe084c9599be1d2e1dab64098045f94a3f79f84b3728273326896\"" Jul 7 00:11:15.207551 containerd[1804]: time="2025-07-07T00:11:15.205282278Z" level=info msg="StartContainer for 
\"c244644719cbe084c9599be1d2e1dab64098045f94a3f79f84b3728273326896\"" Jul 7 00:11:15.233349 systemd[1]: Started cri-containerd-c244644719cbe084c9599be1d2e1dab64098045f94a3f79f84b3728273326896.scope - libcontainer container c244644719cbe084c9599be1d2e1dab64098045f94a3f79f84b3728273326896. Jul 7 00:11:15.246904 containerd[1804]: time="2025-07-07T00:11:15.246880740Z" level=info msg="StartContainer for \"c244644719cbe084c9599be1d2e1dab64098045f94a3f79f84b3728273326896\" returns successfully" Jul 7 00:11:15.247483 containerd[1804]: time="2025-07-07T00:11:15.247469671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 00:11:16.937017 containerd[1804]: time="2025-07-07T00:11:16.936991710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:16.937297 containerd[1804]: time="2025-07-07T00:11:16.937259682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 00:11:16.937645 containerd[1804]: time="2025-07-07T00:11:16.937632170Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:16.938590 containerd[1804]: time="2025-07-07T00:11:16.938575682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:11:16.939040 containerd[1804]: time="2025-07-07T00:11:16.939024423Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.691537157s" Jul 7 00:11:16.939084 containerd[1804]: time="2025-07-07T00:11:16.939042779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 00:11:16.940073 containerd[1804]: time="2025-07-07T00:11:16.940059084Z" level=info msg="CreateContainer within sandbox \"eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 00:11:16.944644 containerd[1804]: time="2025-07-07T00:11:16.944630006Z" level=info msg="CreateContainer within sandbox \"eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"67f7dfb7f18b2cc7c469338ab01ee03445073d11e73ba0b54a4308f0a5ac3f66\"" Jul 7 00:11:16.944831 containerd[1804]: time="2025-07-07T00:11:16.944816899Z" level=info msg="StartContainer for \"67f7dfb7f18b2cc7c469338ab01ee03445073d11e73ba0b54a4308f0a5ac3f66\"" Jul 7 00:11:16.972255 systemd[1]: Started cri-containerd-67f7dfb7f18b2cc7c469338ab01ee03445073d11e73ba0b54a4308f0a5ac3f66.scope - libcontainer container 67f7dfb7f18b2cc7c469338ab01ee03445073d11e73ba0b54a4308f0a5ac3f66. 
Jul 7 00:11:16.986472 containerd[1804]: time="2025-07-07T00:11:16.986446172Z" level=info msg="StartContainer for \"67f7dfb7f18b2cc7c469338ab01ee03445073d11e73ba0b54a4308f0a5ac3f66\" returns successfully" Jul 7 00:11:17.350187 kubelet[3067]: I0707 00:11:17.350107 3067 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 00:11:17.351457 kubelet[3067]: I0707 00:11:17.350226 3067 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 00:11:17.506386 kubelet[3067]: I0707 00:11:17.506235 3067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mvdv8" podStartSLOduration=24.120755827 podStartE2EDuration="34.506196017s" podCreationTimestamp="2025-07-07 00:10:43 +0000 UTC" firstStartedPulling="2025-07-07 00:11:06.55401855 +0000 UTC m=+38.352054792" lastFinishedPulling="2025-07-07 00:11:16.939458743 +0000 UTC m=+48.737494982" observedRunningTime="2025-07-07 00:11:17.50472512 +0000 UTC m=+49.302761460" watchObservedRunningTime="2025-07-07 00:11:17.506196017 +0000 UTC m=+49.304232310" Jul 7 00:11:28.288408 containerd[1804]: time="2025-07-07T00:11:28.288358056Z" level=info msg="StopPodSandbox for \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\"" Jul 7 00:11:28.333187 containerd[1804]: 2025-07-07 00:11:28.309 [WARNING][6704] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f702355d-417f-4caf-86a7-f40f67775a26", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db", Pod:"coredns-668d6bf9bc-2vkrn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb9be6cb51a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:28.333187 containerd[1804]: 2025-07-07 
00:11:28.310 [INFO][6704] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Jul 7 00:11:28.333187 containerd[1804]: 2025-07-07 00:11:28.310 [INFO][6704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" iface="eth0" netns="" Jul 7 00:11:28.333187 containerd[1804]: 2025-07-07 00:11:28.310 [INFO][6704] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Jul 7 00:11:28.333187 containerd[1804]: 2025-07-07 00:11:28.310 [INFO][6704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Jul 7 00:11:28.333187 containerd[1804]: 2025-07-07 00:11:28.324 [INFO][6722] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" HandleID="k8s-pod-network.2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0" Jul 7 00:11:28.333187 containerd[1804]: 2025-07-07 00:11:28.324 [INFO][6722] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:28.333187 containerd[1804]: 2025-07-07 00:11:28.324 [INFO][6722] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:28.333187 containerd[1804]: 2025-07-07 00:11:28.329 [WARNING][6722] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" HandleID="k8s-pod-network.2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0" Jul 7 00:11:28.333187 containerd[1804]: 2025-07-07 00:11:28.329 [INFO][6722] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" HandleID="k8s-pod-network.2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0" Jul 7 00:11:28.333187 containerd[1804]: 2025-07-07 00:11:28.330 [INFO][6722] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:28.333187 containerd[1804]: 2025-07-07 00:11:28.332 [INFO][6704] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Jul 7 00:11:28.333673 containerd[1804]: time="2025-07-07T00:11:28.333213451Z" level=info msg="TearDown network for sandbox \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\" successfully" Jul 7 00:11:28.333673 containerd[1804]: time="2025-07-07T00:11:28.333233061Z" level=info msg="StopPodSandbox for \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\" returns successfully" Jul 7 00:11:28.333673 containerd[1804]: time="2025-07-07T00:11:28.333596507Z" level=info msg="RemovePodSandbox for \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\"" Jul 7 00:11:28.333673 containerd[1804]: time="2025-07-07T00:11:28.333622528Z" level=info msg="Forcibly stopping sandbox \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\"" Jul 7 00:11:28.381474 containerd[1804]: 2025-07-07 00:11:28.358 [WARNING][6749] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f702355d-417f-4caf-86a7-f40f67775a26", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"7c2ee311a7342a4b7e583c80efc60310eba627777b1aaa368ad44843e86278db", Pod:"coredns-668d6bf9bc-2vkrn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb9be6cb51a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:28.381474 containerd[1804]: 2025-07-07 
00:11:28.358 [INFO][6749] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Jul 7 00:11:28.381474 containerd[1804]: 2025-07-07 00:11:28.358 [INFO][6749] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" iface="eth0" netns="" Jul 7 00:11:28.381474 containerd[1804]: 2025-07-07 00:11:28.358 [INFO][6749] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Jul 7 00:11:28.381474 containerd[1804]: 2025-07-07 00:11:28.358 [INFO][6749] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Jul 7 00:11:28.381474 containerd[1804]: 2025-07-07 00:11:28.373 [INFO][6765] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" HandleID="k8s-pod-network.2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0" Jul 7 00:11:28.381474 containerd[1804]: 2025-07-07 00:11:28.373 [INFO][6765] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:28.381474 containerd[1804]: 2025-07-07 00:11:28.373 [INFO][6765] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:28.381474 containerd[1804]: 2025-07-07 00:11:28.378 [WARNING][6765] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" HandleID="k8s-pod-network.2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0" Jul 7 00:11:28.381474 containerd[1804]: 2025-07-07 00:11:28.378 [INFO][6765] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" HandleID="k8s-pod-network.2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--2vkrn-eth0" Jul 7 00:11:28.381474 containerd[1804]: 2025-07-07 00:11:28.379 [INFO][6765] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:28.381474 containerd[1804]: 2025-07-07 00:11:28.380 [INFO][6749] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5" Jul 7 00:11:28.381474 containerd[1804]: time="2025-07-07T00:11:28.381471375Z" level=info msg="TearDown network for sandbox \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\" successfully" Jul 7 00:11:28.383187 containerd[1804]: time="2025-07-07T00:11:28.383160717Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
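The Calico teardown sequence above interleaves `cni-plugin/*` and `ipam/*` lines inside containerd's output, each following the shape `timestamp [LEVEL][id] file.go line: message`. A rough parser for pulling those fields back out; the regex is inferred from the samples in this log, not taken from Calico documentation:

```python
import re

# Field layout assumed from the cni-plugin/ipam lines in this log, e.g.:
#   2025-07-07 00:11:28.310 [INFO][6704] cni-plugin/k8s.go 640: Cleaning up netns ...
CNI_LINE = re.compile(
    r'^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) '
    r'\[(?P<level>[A-Z]+)\]\[(?P<id>\d+)\] '
    r'(?P<file>[\w./-]+) (?P<line>\d+): (?P<msg>.*)$'
)

sample = ('2025-07-07 00:11:28.324 [INFO][6722] ipam/ipam_plugin.go 353: '
          'About to acquire host-wide IPAM lock.')
m = CNI_LINE.match(sample)
fields = m.groupdict() if m else {}
```

Grouping by the bracketed id separates the two concurrent teardown flows (e.g. [6704]/[6722] vs [6749]/[6765] above), which is how the interleaved Releasing/Acquired/Released lock lines can be read in order.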
Jul 7 00:11:28.383224 containerd[1804]: time="2025-07-07T00:11:28.383191826Z" level=info msg="RemovePodSandbox \"2f4a62823bcfbce2360c384f6eb715c12c879c859edf17ac5d9e2273d8e045e5\" returns successfully" Jul 7 00:11:28.383539 containerd[1804]: time="2025-07-07T00:11:28.383498863Z" level=info msg="StopPodSandbox for \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\"" Jul 7 00:11:28.457926 containerd[1804]: 2025-07-07 00:11:28.401 [WARNING][6789] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0", GenerateName:"calico-kube-controllers-57c6d9946c-", Namespace:"calico-system", SelfLink:"", UID:"02f7ed80-c14c-4630-a553-ace08c648a2b", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57c6d9946c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f", Pod:"calico-kube-controllers-57c6d9946c-d62bp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.197/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8801902be33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:28.457926 containerd[1804]: 2025-07-07 00:11:28.402 [INFO][6789] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Jul 7 00:11:28.457926 containerd[1804]: 2025-07-07 00:11:28.402 [INFO][6789] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" iface="eth0" netns="" Jul 7 00:11:28.457926 containerd[1804]: 2025-07-07 00:11:28.402 [INFO][6789] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Jul 7 00:11:28.457926 containerd[1804]: 2025-07-07 00:11:28.402 [INFO][6789] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Jul 7 00:11:28.457926 containerd[1804]: 2025-07-07 00:11:28.447 [INFO][6803] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" HandleID="k8s-pod-network.1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0" Jul 7 00:11:28.457926 containerd[1804]: 2025-07-07 00:11:28.447 [INFO][6803] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:28.457926 containerd[1804]: 2025-07-07 00:11:28.447 [INFO][6803] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:28.457926 containerd[1804]: 2025-07-07 00:11:28.454 [WARNING][6803] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" HandleID="k8s-pod-network.1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0" Jul 7 00:11:28.457926 containerd[1804]: 2025-07-07 00:11:28.454 [INFO][6803] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" HandleID="k8s-pod-network.1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0" Jul 7 00:11:28.457926 containerd[1804]: 2025-07-07 00:11:28.455 [INFO][6803] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:28.457926 containerd[1804]: 2025-07-07 00:11:28.456 [INFO][6789] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Jul 7 00:11:28.458450 containerd[1804]: time="2025-07-07T00:11:28.457952104Z" level=info msg="TearDown network for sandbox \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\" successfully" Jul 7 00:11:28.458450 containerd[1804]: time="2025-07-07T00:11:28.457974128Z" level=info msg="StopPodSandbox for \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\" returns successfully" Jul 7 00:11:28.458450 containerd[1804]: time="2025-07-07T00:11:28.458294621Z" level=info msg="RemovePodSandbox for \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\"" Jul 7 00:11:28.458450 containerd[1804]: time="2025-07-07T00:11:28.458323397Z" level=info msg="Forcibly stopping sandbox \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\"" Jul 7 00:11:28.503935 containerd[1804]: 2025-07-07 00:11:28.485 [WARNING][6832] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0", GenerateName:"calico-kube-controllers-57c6d9946c-", Namespace:"calico-system", SelfLink:"", UID:"02f7ed80-c14c-4630-a553-ace08c648a2b", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57c6d9946c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"f69d82e426c2f97f50275a908e5a921a12f9285998e24b49611ffb86e3aaff6f", Pod:"calico-kube-controllers-57c6d9946c-d62bp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8801902be33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:28.503935 containerd[1804]: 2025-07-07 00:11:28.485 [INFO][6832] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Jul 7 00:11:28.503935 containerd[1804]: 2025-07-07 00:11:28.485 [INFO][6832] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" iface="eth0" netns="" Jul 7 00:11:28.503935 containerd[1804]: 2025-07-07 00:11:28.485 [INFO][6832] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Jul 7 00:11:28.503935 containerd[1804]: 2025-07-07 00:11:28.485 [INFO][6832] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Jul 7 00:11:28.503935 containerd[1804]: 2025-07-07 00:11:28.497 [INFO][6849] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" HandleID="k8s-pod-network.1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0" Jul 7 00:11:28.503935 containerd[1804]: 2025-07-07 00:11:28.497 [INFO][6849] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:28.503935 containerd[1804]: 2025-07-07 00:11:28.497 [INFO][6849] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:28.503935 containerd[1804]: 2025-07-07 00:11:28.501 [WARNING][6849] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" HandleID="k8s-pod-network.1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0" Jul 7 00:11:28.503935 containerd[1804]: 2025-07-07 00:11:28.501 [INFO][6849] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" HandleID="k8s-pod-network.1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--kube--controllers--57c6d9946c--d62bp-eth0" Jul 7 00:11:28.503935 containerd[1804]: 2025-07-07 00:11:28.502 [INFO][6849] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:28.503935 containerd[1804]: 2025-07-07 00:11:28.503 [INFO][6832] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533" Jul 7 00:11:28.504244 containerd[1804]: time="2025-07-07T00:11:28.503958931Z" level=info msg="TearDown network for sandbox \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\" successfully" Jul 7 00:11:28.505341 containerd[1804]: time="2025-07-07T00:11:28.505329488Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 00:11:28.505369 containerd[1804]: time="2025-07-07T00:11:28.505363739Z" level=info msg="RemovePodSandbox \"1336019cf926b479fd4bd3a6ee3f1d30d18a6a3bc91c7c76785ae656b918e533\" returns successfully" Jul 7 00:11:28.505676 containerd[1804]: time="2025-07-07T00:11:28.505633170Z" level=info msg="StopPodSandbox for \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\"" Jul 7 00:11:28.538465 containerd[1804]: 2025-07-07 00:11:28.522 [WARNING][6875] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--56d4f4c756--5qjv5-eth0" Jul 7 00:11:28.538465 containerd[1804]: 2025-07-07 00:11:28.522 [INFO][6875] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Jul 7 00:11:28.538465 containerd[1804]: 2025-07-07 00:11:28.522 [INFO][6875] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" iface="eth0" netns="" Jul 7 00:11:28.538465 containerd[1804]: 2025-07-07 00:11:28.522 [INFO][6875] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Jul 7 00:11:28.538465 containerd[1804]: 2025-07-07 00:11:28.522 [INFO][6875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Jul 7 00:11:28.538465 containerd[1804]: 2025-07-07 00:11:28.532 [INFO][6894] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" HandleID="k8s-pod-network.ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--56d4f4c756--5qjv5-eth0" Jul 7 00:11:28.538465 containerd[1804]: 2025-07-07 00:11:28.532 [INFO][6894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:28.538465 containerd[1804]: 2025-07-07 00:11:28.532 [INFO][6894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:28.538465 containerd[1804]: 2025-07-07 00:11:28.535 [WARNING][6894] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" HandleID="k8s-pod-network.ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--56d4f4c756--5qjv5-eth0" Jul 7 00:11:28.538465 containerd[1804]: 2025-07-07 00:11:28.535 [INFO][6894] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" HandleID="k8s-pod-network.ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--56d4f4c756--5qjv5-eth0" Jul 7 00:11:28.538465 containerd[1804]: 2025-07-07 00:11:28.537 [INFO][6894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:28.538465 containerd[1804]: 2025-07-07 00:11:28.537 [INFO][6875] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Jul 7 00:11:28.538465 containerd[1804]: time="2025-07-07T00:11:28.538453548Z" level=info msg="TearDown network for sandbox \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\" successfully" Jul 7 00:11:28.538816 containerd[1804]: time="2025-07-07T00:11:28.538468636Z" level=info msg="StopPodSandbox for \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\" returns successfully" Jul 7 00:11:28.538816 containerd[1804]: time="2025-07-07T00:11:28.538720151Z" level=info msg="RemovePodSandbox for \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\"" Jul 7 00:11:28.538816 containerd[1804]: time="2025-07-07T00:11:28.538736139Z" level=info msg="Forcibly stopping sandbox \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\"" Jul 7 00:11:28.571592 containerd[1804]: 2025-07-07 00:11:28.555 [WARNING][6916] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" WorkloadEndpoint="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--56d4f4c756--5qjv5-eth0" Jul 7 00:11:28.571592 containerd[1804]: 2025-07-07 00:11:28.555 [INFO][6916] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Jul 7 00:11:28.571592 containerd[1804]: 2025-07-07 00:11:28.555 [INFO][6916] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" iface="eth0" netns="" Jul 7 00:11:28.571592 containerd[1804]: 2025-07-07 00:11:28.555 [INFO][6916] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Jul 7 00:11:28.571592 containerd[1804]: 2025-07-07 00:11:28.556 [INFO][6916] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Jul 7 00:11:28.571592 containerd[1804]: 2025-07-07 00:11:28.565 [INFO][6930] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" HandleID="k8s-pod-network.ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--56d4f4c756--5qjv5-eth0" Jul 7 00:11:28.571592 containerd[1804]: 2025-07-07 00:11:28.565 [INFO][6930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:28.571592 containerd[1804]: 2025-07-07 00:11:28.565 [INFO][6930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:28.571592 containerd[1804]: 2025-07-07 00:11:28.569 [WARNING][6930] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" HandleID="k8s-pod-network.ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--56d4f4c756--5qjv5-eth0" Jul 7 00:11:28.571592 containerd[1804]: 2025-07-07 00:11:28.569 [INFO][6930] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" HandleID="k8s-pod-network.ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-whisker--56d4f4c756--5qjv5-eth0" Jul 7 00:11:28.571592 containerd[1804]: 2025-07-07 00:11:28.570 [INFO][6930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:28.571592 containerd[1804]: 2025-07-07 00:11:28.570 [INFO][6916] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2" Jul 7 00:11:28.571853 containerd[1804]: time="2025-07-07T00:11:28.571618257Z" level=info msg="TearDown network for sandbox \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\" successfully" Jul 7 00:11:28.573118 containerd[1804]: time="2025-07-07T00:11:28.573106567Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 00:11:28.573155 containerd[1804]: time="2025-07-07T00:11:28.573146202Z" level=info msg="RemovePodSandbox \"ae2eb0913027a6040a380e2365b7393574ec9e7b1f235da2814428de0f5fe7a2\" returns successfully" Jul 7 00:11:28.573406 containerd[1804]: time="2025-07-07T00:11:28.573394876Z" level=info msg="StopPodSandbox for \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\"" Jul 7 00:11:28.611435 containerd[1804]: 2025-07-07 00:11:28.591 [WARNING][6954] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0", GenerateName:"calico-apiserver-898df8c5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5989df0-3d41-4e07-823a-56249763eb4e", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"898df8c5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e", Pod:"calico-apiserver-898df8c5d-x44jj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid65c909bea3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:28.611435 containerd[1804]: 2025-07-07 00:11:28.591 [INFO][6954] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Jul 7 00:11:28.611435 containerd[1804]: 2025-07-07 00:11:28.591 [INFO][6954] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" iface="eth0" netns="" Jul 7 00:11:28.611435 containerd[1804]: 2025-07-07 00:11:28.591 [INFO][6954] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Jul 7 00:11:28.611435 containerd[1804]: 2025-07-07 00:11:28.591 [INFO][6954] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Jul 7 00:11:28.611435 containerd[1804]: 2025-07-07 00:11:28.603 [INFO][6970] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" HandleID="k8s-pod-network.7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0" Jul 7 00:11:28.611435 containerd[1804]: 2025-07-07 00:11:28.603 [INFO][6970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:28.611435 containerd[1804]: 2025-07-07 00:11:28.603 [INFO][6970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:28.611435 containerd[1804]: 2025-07-07 00:11:28.608 [WARNING][6970] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" HandleID="k8s-pod-network.7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0" Jul 7 00:11:28.611435 containerd[1804]: 2025-07-07 00:11:28.608 [INFO][6970] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" HandleID="k8s-pod-network.7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0" Jul 7 00:11:28.611435 containerd[1804]: 2025-07-07 00:11:28.609 [INFO][6970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:28.611435 containerd[1804]: 2025-07-07 00:11:28.610 [INFO][6954] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Jul 7 00:11:28.611784 containerd[1804]: time="2025-07-07T00:11:28.611444397Z" level=info msg="TearDown network for sandbox \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\" successfully" Jul 7 00:11:28.611784 containerd[1804]: time="2025-07-07T00:11:28.611464120Z" level=info msg="StopPodSandbox for \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\" returns successfully" Jul 7 00:11:28.611784 containerd[1804]: time="2025-07-07T00:11:28.611746920Z" level=info msg="RemovePodSandbox for \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\"" Jul 7 00:11:28.611784 containerd[1804]: time="2025-07-07T00:11:28.611765887Z" level=info msg="Forcibly stopping sandbox \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\"" Jul 7 00:11:28.651882 containerd[1804]: 2025-07-07 00:11:28.632 [WARNING][6996] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0", GenerateName:"calico-apiserver-898df8c5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5989df0-3d41-4e07-823a-56249763eb4e", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"898df8c5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"159d8f42a434f3994a55ebb963666862eed20b71202477663ee1e0c94fd63d6e", Pod:"calico-apiserver-898df8c5d-x44jj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid65c909bea3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:28.651882 containerd[1804]: 2025-07-07 00:11:28.632 [INFO][6996] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Jul 7 00:11:28.651882 containerd[1804]: 2025-07-07 00:11:28.632 [INFO][6996] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" iface="eth0" netns="" Jul 7 00:11:28.651882 containerd[1804]: 2025-07-07 00:11:28.632 [INFO][6996] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Jul 7 00:11:28.651882 containerd[1804]: 2025-07-07 00:11:28.632 [INFO][6996] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Jul 7 00:11:28.651882 containerd[1804]: 2025-07-07 00:11:28.644 [INFO][7012] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" HandleID="k8s-pod-network.7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0" Jul 7 00:11:28.651882 containerd[1804]: 2025-07-07 00:11:28.644 [INFO][7012] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:28.651882 containerd[1804]: 2025-07-07 00:11:28.644 [INFO][7012] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:28.651882 containerd[1804]: 2025-07-07 00:11:28.649 [WARNING][7012] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" HandleID="k8s-pod-network.7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0" Jul 7 00:11:28.651882 containerd[1804]: 2025-07-07 00:11:28.649 [INFO][7012] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" HandleID="k8s-pod-network.7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--x44jj-eth0" Jul 7 00:11:28.651882 containerd[1804]: 2025-07-07 00:11:28.650 [INFO][7012] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:28.651882 containerd[1804]: 2025-07-07 00:11:28.651 [INFO][6996] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579" Jul 7 00:11:28.652247 containerd[1804]: time="2025-07-07T00:11:28.651912386Z" level=info msg="TearDown network for sandbox \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\" successfully" Jul 7 00:11:28.722413 containerd[1804]: time="2025-07-07T00:11:28.722384921Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 00:11:28.722490 containerd[1804]: time="2025-07-07T00:11:28.722430834Z" level=info msg="RemovePodSandbox \"7bd3d796a6c16b177676d4524b44678566e6365aec6156ce2694950dcefd7579\" returns successfully" Jul 7 00:11:28.722757 containerd[1804]: time="2025-07-07T00:11:28.722709175Z" level=info msg="StopPodSandbox for \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\"" Jul 7 00:11:28.761824 containerd[1804]: 2025-07-07 00:11:28.742 [WARNING][7037] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"c74f75d9-0067-422a-a233-ade5735b2645", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84", Pod:"goldmane-768f4c5c69-xjqj5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.19.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calibe689a5acab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:28.761824 containerd[1804]: 2025-07-07 00:11:28.742 [INFO][7037] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Jul 7 00:11:28.761824 containerd[1804]: 2025-07-07 00:11:28.742 [INFO][7037] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" iface="eth0" netns="" Jul 7 00:11:28.761824 containerd[1804]: 2025-07-07 00:11:28.742 [INFO][7037] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Jul 7 00:11:28.761824 containerd[1804]: 2025-07-07 00:11:28.742 [INFO][7037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Jul 7 00:11:28.761824 containerd[1804]: 2025-07-07 00:11:28.755 [INFO][7052] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" HandleID="k8s-pod-network.f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0" Jul 7 00:11:28.761824 containerd[1804]: 2025-07-07 00:11:28.755 [INFO][7052] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:28.761824 containerd[1804]: 2025-07-07 00:11:28.755 [INFO][7052] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:28.761824 containerd[1804]: 2025-07-07 00:11:28.759 [WARNING][7052] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" HandleID="k8s-pod-network.f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0" Jul 7 00:11:28.761824 containerd[1804]: 2025-07-07 00:11:28.759 [INFO][7052] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" HandleID="k8s-pod-network.f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0" Jul 7 00:11:28.761824 containerd[1804]: 2025-07-07 00:11:28.760 [INFO][7052] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:28.761824 containerd[1804]: 2025-07-07 00:11:28.760 [INFO][7037] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Jul 7 00:11:28.762244 containerd[1804]: time="2025-07-07T00:11:28.761854659Z" level=info msg="TearDown network for sandbox \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\" successfully" Jul 7 00:11:28.762244 containerd[1804]: time="2025-07-07T00:11:28.761871617Z" level=info msg="StopPodSandbox for \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\" returns successfully" Jul 7 00:11:28.762244 containerd[1804]: time="2025-07-07T00:11:28.762178941Z" level=info msg="RemovePodSandbox for \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\"" Jul 7 00:11:28.762244 containerd[1804]: time="2025-07-07T00:11:28.762199880Z" level=info msg="Forcibly stopping sandbox \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\"" Jul 7 00:11:28.798150 containerd[1804]: 2025-07-07 00:11:28.781 [WARNING][7078] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"c74f75d9-0067-422a-a233-ade5735b2645", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"7f5a8b1639af0b4d8daa4db8cb8327ee550ab0d6875de8f3806037ef6dc18d84", Pod:"goldmane-768f4c5c69-xjqj5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.19.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibe689a5acab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:28.798150 containerd[1804]: 2025-07-07 00:11:28.781 [INFO][7078] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Jul 7 00:11:28.798150 containerd[1804]: 2025-07-07 00:11:28.781 [INFO][7078] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" iface="eth0" netns="" Jul 7 00:11:28.798150 containerd[1804]: 2025-07-07 00:11:28.781 [INFO][7078] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Jul 7 00:11:28.798150 containerd[1804]: 2025-07-07 00:11:28.781 [INFO][7078] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Jul 7 00:11:28.798150 containerd[1804]: 2025-07-07 00:11:28.791 [INFO][7095] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" HandleID="k8s-pod-network.f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0" Jul 7 00:11:28.798150 containerd[1804]: 2025-07-07 00:11:28.792 [INFO][7095] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:28.798150 containerd[1804]: 2025-07-07 00:11:28.792 [INFO][7095] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:28.798150 containerd[1804]: 2025-07-07 00:11:28.795 [WARNING][7095] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" HandleID="k8s-pod-network.f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0" Jul 7 00:11:28.798150 containerd[1804]: 2025-07-07 00:11:28.795 [INFO][7095] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" HandleID="k8s-pod-network.f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-goldmane--768f4c5c69--xjqj5-eth0" Jul 7 00:11:28.798150 containerd[1804]: 2025-07-07 00:11:28.796 [INFO][7095] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:28.798150 containerd[1804]: 2025-07-07 00:11:28.797 [INFO][7078] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369" Jul 7 00:11:28.798150 containerd[1804]: time="2025-07-07T00:11:28.798133915Z" level=info msg="TearDown network for sandbox \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\" successfully" Jul 7 00:11:28.968613 containerd[1804]: time="2025-07-07T00:11:28.968552963Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 00:11:28.968613 containerd[1804]: time="2025-07-07T00:11:28.968608570Z" level=info msg="RemovePodSandbox \"f3f61f039606d27d5c539deb1d2536df0384298781f497f3659c6665be9b9369\" returns successfully" Jul 7 00:11:28.968920 containerd[1804]: time="2025-07-07T00:11:28.968900558Z" level=info msg="StopPodSandbox for \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\"" Jul 7 00:11:29.017226 containerd[1804]: 2025-07-07 00:11:28.992 [WARNING][7120] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0", GenerateName:"calico-apiserver-898df8c5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c89e35d-ce3a-44df-8fe5-08a58c4b851d", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"898df8c5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22", Pod:"calico-apiserver-898df8c5d-mjxn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cc0a1cd1f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:29.017226 containerd[1804]: 2025-07-07 00:11:28.992 [INFO][7120] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Jul 7 00:11:29.017226 containerd[1804]: 2025-07-07 00:11:28.992 [INFO][7120] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" iface="eth0" netns="" Jul 7 00:11:29.017226 containerd[1804]: 2025-07-07 00:11:28.992 [INFO][7120] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Jul 7 00:11:29.017226 containerd[1804]: 2025-07-07 00:11:28.992 [INFO][7120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Jul 7 00:11:29.017226 containerd[1804]: 2025-07-07 00:11:29.008 [INFO][7137] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" HandleID="k8s-pod-network.f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0" Jul 7 00:11:29.017226 containerd[1804]: 2025-07-07 00:11:29.008 [INFO][7137] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:29.017226 containerd[1804]: 2025-07-07 00:11:29.008 [INFO][7137] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:29.017226 containerd[1804]: 2025-07-07 00:11:29.013 [WARNING][7137] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" HandleID="k8s-pod-network.f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0" Jul 7 00:11:29.017226 containerd[1804]: 2025-07-07 00:11:29.013 [INFO][7137] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" HandleID="k8s-pod-network.f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0" Jul 7 00:11:29.017226 containerd[1804]: 2025-07-07 00:11:29.014 [INFO][7137] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:29.017226 containerd[1804]: 2025-07-07 00:11:29.016 [INFO][7120] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Jul 7 00:11:29.017678 containerd[1804]: time="2025-07-07T00:11:29.017237827Z" level=info msg="TearDown network for sandbox \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\" successfully" Jul 7 00:11:29.017678 containerd[1804]: time="2025-07-07T00:11:29.017263463Z" level=info msg="StopPodSandbox for \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\" returns successfully" Jul 7 00:11:29.017678 containerd[1804]: time="2025-07-07T00:11:29.017628048Z" level=info msg="RemovePodSandbox for \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\"" Jul 7 00:11:29.017678 containerd[1804]: time="2025-07-07T00:11:29.017658172Z" level=info msg="Forcibly stopping sandbox \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\"" Jul 7 00:11:29.068896 containerd[1804]: 2025-07-07 00:11:29.043 [WARNING][7164] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0", GenerateName:"calico-apiserver-898df8c5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c89e35d-ce3a-44df-8fe5-08a58c4b851d", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"898df8c5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"70a8bd3882b1cc385de9632787fbd2535f3a201edcf1ee6cd9facaae106dca22", Pod:"calico-apiserver-898df8c5d-mjxn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cc0a1cd1f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:29.068896 containerd[1804]: 2025-07-07 00:11:29.043 [INFO][7164] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Jul 7 00:11:29.068896 containerd[1804]: 2025-07-07 00:11:29.043 [INFO][7164] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" iface="eth0" netns="" Jul 7 00:11:29.068896 containerd[1804]: 2025-07-07 00:11:29.043 [INFO][7164] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Jul 7 00:11:29.068896 containerd[1804]: 2025-07-07 00:11:29.043 [INFO][7164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Jul 7 00:11:29.068896 containerd[1804]: 2025-07-07 00:11:29.060 [INFO][7182] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" HandleID="k8s-pod-network.f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0" Jul 7 00:11:29.068896 containerd[1804]: 2025-07-07 00:11:29.060 [INFO][7182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:29.068896 containerd[1804]: 2025-07-07 00:11:29.060 [INFO][7182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:29.068896 containerd[1804]: 2025-07-07 00:11:29.065 [WARNING][7182] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" HandleID="k8s-pod-network.f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0" Jul 7 00:11:29.068896 containerd[1804]: 2025-07-07 00:11:29.065 [INFO][7182] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" HandleID="k8s-pod-network.f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-calico--apiserver--898df8c5d--mjxn2-eth0" Jul 7 00:11:29.068896 containerd[1804]: 2025-07-07 00:11:29.066 [INFO][7182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:29.068896 containerd[1804]: 2025-07-07 00:11:29.067 [INFO][7164] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59" Jul 7 00:11:29.068896 containerd[1804]: time="2025-07-07T00:11:29.068883130Z" level=info msg="TearDown network for sandbox \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\" successfully" Jul 7 00:11:29.161024 containerd[1804]: time="2025-07-07T00:11:29.160967041Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 00:11:29.161024 containerd[1804]: time="2025-07-07T00:11:29.161015025Z" level=info msg="RemovePodSandbox \"f03c10b6d081b0515ad62017331580530338a1e272fb91d8ed0347f9b2961a59\" returns successfully" Jul 7 00:11:29.161359 containerd[1804]: time="2025-07-07T00:11:29.161317280Z" level=info msg="StopPodSandbox for \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\"" Jul 7 00:11:29.197316 containerd[1804]: 2025-07-07 00:11:29.180 [WARNING][7210] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8ff5f712-5346-4f8c-8f1a-94e4806cd738", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc", Pod:"coredns-668d6bf9bc-t4xs8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7f9e0ed048", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:29.197316 containerd[1804]: 2025-07-07 00:11:29.181 [INFO][7210] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Jul 7 00:11:29.197316 containerd[1804]: 2025-07-07 00:11:29.181 [INFO][7210] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" iface="eth0" netns="" Jul 7 00:11:29.197316 containerd[1804]: 2025-07-07 00:11:29.181 [INFO][7210] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Jul 7 00:11:29.197316 containerd[1804]: 2025-07-07 00:11:29.181 [INFO][7210] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Jul 7 00:11:29.197316 containerd[1804]: 2025-07-07 00:11:29.191 [INFO][7227] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" HandleID="k8s-pod-network.55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0" Jul 7 00:11:29.197316 containerd[1804]: 2025-07-07 00:11:29.191 [INFO][7227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 00:11:29.197316 containerd[1804]: 2025-07-07 00:11:29.191 [INFO][7227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:29.197316 containerd[1804]: 2025-07-07 00:11:29.194 [WARNING][7227] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" HandleID="k8s-pod-network.55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0" Jul 7 00:11:29.197316 containerd[1804]: 2025-07-07 00:11:29.194 [INFO][7227] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" HandleID="k8s-pod-network.55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0" Jul 7 00:11:29.197316 containerd[1804]: 2025-07-07 00:11:29.195 [INFO][7227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:29.197316 containerd[1804]: 2025-07-07 00:11:29.196 [INFO][7210] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Jul 7 00:11:29.197631 containerd[1804]: time="2025-07-07T00:11:29.197338268Z" level=info msg="TearDown network for sandbox \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\" successfully" Jul 7 00:11:29.197631 containerd[1804]: time="2025-07-07T00:11:29.197352725Z" level=info msg="StopPodSandbox for \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\" returns successfully" Jul 7 00:11:29.197631 containerd[1804]: time="2025-07-07T00:11:29.197607228Z" level=info msg="RemovePodSandbox for \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\"" Jul 7 00:11:29.197631 containerd[1804]: time="2025-07-07T00:11:29.197628268Z" level=info msg="Forcibly stopping sandbox \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\"" Jul 7 00:11:29.233450 containerd[1804]: 2025-07-07 00:11:29.217 [WARNING][7252] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8ff5f712-5346-4f8c-8f1a-94e4806cd738", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"7c2e370e367c5e9f61ddec21a5c30c2c00b81154dac7b7e52870686e58ac50cc", Pod:"coredns-668d6bf9bc-t4xs8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7f9e0ed048", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:29.233450 containerd[1804]: 2025-07-07 
00:11:29.217 [INFO][7252] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Jul 7 00:11:29.233450 containerd[1804]: 2025-07-07 00:11:29.217 [INFO][7252] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" iface="eth0" netns="" Jul 7 00:11:29.233450 containerd[1804]: 2025-07-07 00:11:29.217 [INFO][7252] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Jul 7 00:11:29.233450 containerd[1804]: 2025-07-07 00:11:29.217 [INFO][7252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Jul 7 00:11:29.233450 containerd[1804]: 2025-07-07 00:11:29.226 [INFO][7268] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" HandleID="k8s-pod-network.55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0" Jul 7 00:11:29.233450 containerd[1804]: 2025-07-07 00:11:29.226 [INFO][7268] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:29.233450 containerd[1804]: 2025-07-07 00:11:29.226 [INFO][7268] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:29.233450 containerd[1804]: 2025-07-07 00:11:29.231 [WARNING][7268] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" HandleID="k8s-pod-network.55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0" Jul 7 00:11:29.233450 containerd[1804]: 2025-07-07 00:11:29.231 [INFO][7268] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" HandleID="k8s-pod-network.55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-coredns--668d6bf9bc--t4xs8-eth0" Jul 7 00:11:29.233450 containerd[1804]: 2025-07-07 00:11:29.232 [INFO][7268] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:29.233450 containerd[1804]: 2025-07-07 00:11:29.232 [INFO][7252] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604" Jul 7 00:11:29.233747 containerd[1804]: time="2025-07-07T00:11:29.233479849Z" level=info msg="TearDown network for sandbox \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\" successfully" Jul 7 00:11:29.456944 containerd[1804]: time="2025-07-07T00:11:29.456835681Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 00:11:29.456944 containerd[1804]: time="2025-07-07T00:11:29.456899355Z" level=info msg="RemovePodSandbox \"55af71c5b90214a7c806eb06fd5948f77b58eb4ae67ad1aaa6ade32cdde0b604\" returns successfully" Jul 7 00:11:29.457273 containerd[1804]: time="2025-07-07T00:11:29.457253538Z" level=info msg="StopPodSandbox for \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\"" Jul 7 00:11:29.503833 containerd[1804]: 2025-07-07 00:11:29.481 [WARNING][7295] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c39c50a-eb2f-499c-b38e-71339392cd68", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717", Pod:"csi-node-driver-mvdv8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif225666af39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:29.503833 containerd[1804]: 2025-07-07 00:11:29.481 [INFO][7295] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Jul 7 00:11:29.503833 containerd[1804]: 2025-07-07 00:11:29.481 [INFO][7295] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" iface="eth0" netns="" Jul 7 00:11:29.503833 containerd[1804]: 2025-07-07 00:11:29.481 [INFO][7295] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Jul 7 00:11:29.503833 containerd[1804]: 2025-07-07 00:11:29.481 [INFO][7295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Jul 7 00:11:29.503833 containerd[1804]: 2025-07-07 00:11:29.495 [INFO][7314] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" HandleID="k8s-pod-network.9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0" Jul 7 00:11:29.503833 containerd[1804]: 2025-07-07 00:11:29.495 [INFO][7314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:29.503833 containerd[1804]: 2025-07-07 00:11:29.495 [INFO][7314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:29.503833 containerd[1804]: 2025-07-07 00:11:29.500 [WARNING][7314] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" HandleID="k8s-pod-network.9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0" Jul 7 00:11:29.503833 containerd[1804]: 2025-07-07 00:11:29.501 [INFO][7314] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" HandleID="k8s-pod-network.9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0" Jul 7 00:11:29.503833 containerd[1804]: 2025-07-07 00:11:29.501 [INFO][7314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:29.503833 containerd[1804]: 2025-07-07 00:11:29.502 [INFO][7295] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Jul 7 00:11:29.504323 containerd[1804]: time="2025-07-07T00:11:29.503853412Z" level=info msg="TearDown network for sandbox \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\" successfully" Jul 7 00:11:29.504323 containerd[1804]: time="2025-07-07T00:11:29.503873560Z" level=info msg="StopPodSandbox for \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\" returns successfully" Jul 7 00:11:29.504323 containerd[1804]: time="2025-07-07T00:11:29.504209315Z" level=info msg="RemovePodSandbox for \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\"" Jul 7 00:11:29.504323 containerd[1804]: time="2025-07-07T00:11:29.504231632Z" level=info msg="Forcibly stopping sandbox \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\"" Jul 7 00:11:29.549522 containerd[1804]: 2025-07-07 00:11:29.527 [WARNING][7341] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c39c50a-eb2f-499c-b38e-71339392cd68", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 10, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-a-fd0ee851f3", ContainerID:"eadb2063ffd27ec169d2ef7065f0850a9b9da502853e7928a2c6f0b812a46717", Pod:"csi-node-driver-mvdv8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif225666af39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:11:29.549522 containerd[1804]: 2025-07-07 00:11:29.527 [INFO][7341] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Jul 7 00:11:29.549522 containerd[1804]: 2025-07-07 00:11:29.527 [INFO][7341] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" iface="eth0" netns="" Jul 7 00:11:29.549522 containerd[1804]: 2025-07-07 00:11:29.527 [INFO][7341] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Jul 7 00:11:29.549522 containerd[1804]: 2025-07-07 00:11:29.527 [INFO][7341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Jul 7 00:11:29.549522 containerd[1804]: 2025-07-07 00:11:29.541 [INFO][7358] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" HandleID="k8s-pod-network.9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0" Jul 7 00:11:29.549522 containerd[1804]: 2025-07-07 00:11:29.541 [INFO][7358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:11:29.549522 containerd[1804]: 2025-07-07 00:11:29.541 [INFO][7358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:11:29.549522 containerd[1804]: 2025-07-07 00:11:29.546 [WARNING][7358] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" HandleID="k8s-pod-network.9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0" Jul 7 00:11:29.549522 containerd[1804]: 2025-07-07 00:11:29.546 [INFO][7358] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" HandleID="k8s-pod-network.9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Workload="ci--4081.3.4--a--fd0ee851f3-k8s-csi--node--driver--mvdv8-eth0" Jul 7 00:11:29.549522 containerd[1804]: 2025-07-07 00:11:29.547 [INFO][7358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:11:29.549522 containerd[1804]: 2025-07-07 00:11:29.548 [INFO][7341] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838" Jul 7 00:11:29.549925 containerd[1804]: time="2025-07-07T00:11:29.549526241Z" level=info msg="TearDown network for sandbox \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\" successfully" Jul 7 00:11:29.551238 containerd[1804]: time="2025-07-07T00:11:29.551225186Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 00:11:29.551275 containerd[1804]: time="2025-07-07T00:11:29.551256686Z" level=info msg="RemovePodSandbox \"9382e1a8d74a9616708f83d84148c00a4d859894bd18f46b3143bc040bdf3838\" returns successfully" Jul 7 00:11:34.836025 kubelet[3067]: I0707 00:11:34.835924 3067 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:11:35.829214 kubelet[3067]: I0707 00:11:35.829193 3067 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:16:38.152971 update_engine[1799]: I20250707 00:16:38.152894 1799 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 7 00:16:38.152971 update_engine[1799]: I20250707 00:16:38.152941 1799 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 7 00:16:38.154293 update_engine[1799]: I20250707 00:16:38.153115 1799 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 7 00:16:38.154293 update_engine[1799]: I20250707 00:16:38.153581 1799 omaha_request_params.cc:62] Current group set to lts Jul 7 00:16:38.154293 update_engine[1799]: I20250707 00:16:38.153685 1799 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 7 00:16:38.154293 update_engine[1799]: I20250707 00:16:38.153698 1799 update_attempter.cc:643] Scheduling an action processor start. 
Jul 7 00:16:38.154293 update_engine[1799]: I20250707 00:16:38.153716 1799 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 7 00:16:38.154293 update_engine[1799]: I20250707 00:16:38.153750 1799 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 7 00:16:38.154293 update_engine[1799]: I20250707 00:16:38.153828 1799 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 7 00:16:38.154293 update_engine[1799]: I20250707 00:16:38.153842 1799 omaha_request_action.cc:272] Request: Jul 7 00:16:38.154293 update_engine[1799]: Jul 7 00:16:38.154293 update_engine[1799]: Jul 7 00:16:38.154293 update_engine[1799]: Jul 7 00:16:38.154293 update_engine[1799]: Jul 7 00:16:38.154293 update_engine[1799]: Jul 7 00:16:38.154293 update_engine[1799]: Jul 7 00:16:38.154293 update_engine[1799]: Jul 7 00:16:38.154293 update_engine[1799]: Jul 7 00:16:38.154293 update_engine[1799]: I20250707 00:16:38.153849 1799 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 00:16:38.154827 locksmithd[1841]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 7 00:16:38.155380 update_engine[1799]: I20250707 00:16:38.155332 1799 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 00:16:38.155686 update_engine[1799]: I20250707 00:16:38.155651 1799 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 00:16:38.156463 update_engine[1799]: E20250707 00:16:38.156411 1799 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 00:16:38.156545 update_engine[1799]: I20250707 00:16:38.156481 1799 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 7 00:16:41.122325 systemd[1]: Started sshd@9-147.28.180.255:22-139.178.89.65:58696.service - OpenSSH per-connection server daemon (139.178.89.65:58696). 
Jul 7 00:16:41.257067 sshd[8607]: Accepted publickey for core from 139.178.89.65 port 58696 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8 Jul 7 00:16:41.258448 sshd[8607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:41.263008 systemd-logind[1794]: New session 12 of user core. Jul 7 00:16:41.273585 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 00:16:41.412806 sshd[8607]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:41.414322 systemd[1]: sshd@9-147.28.180.255:22-139.178.89.65:58696.service: Deactivated successfully. Jul 7 00:16:41.415204 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:16:41.415942 systemd-logind[1794]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:16:41.416627 systemd-logind[1794]: Removed session 12. Jul 7 00:16:46.432634 systemd[1]: Started sshd@10-147.28.180.255:22-139.178.89.65:58710.service - OpenSSH per-connection server daemon (139.178.89.65:58710). Jul 7 00:16:46.521911 sshd[8716]: Accepted publickey for core from 139.178.89.65 port 58710 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8 Jul 7 00:16:46.523490 sshd[8716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:46.528271 systemd-logind[1794]: New session 13 of user core. Jul 7 00:16:46.538677 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 00:16:46.665681 sshd[8716]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:46.667233 systemd[1]: sshd@10-147.28.180.255:22-139.178.89.65:58710.service: Deactivated successfully. Jul 7 00:16:46.668121 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:16:46.668781 systemd-logind[1794]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:16:46.669326 systemd-logind[1794]: Removed session 13. 
Jul 7 00:16:48.103331 update_engine[1799]: I20250707 00:16:48.103183 1799 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 00:16:48.104243 update_engine[1799]: I20250707 00:16:48.103905 1799 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 00:16:48.104672 update_engine[1799]: I20250707 00:16:48.104563 1799 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 00:16:48.105350 update_engine[1799]: E20250707 00:16:48.105240 1799 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 00:16:48.105540 update_engine[1799]: I20250707 00:16:48.105407 1799 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 7 00:16:51.691513 systemd[1]: Started sshd@11-147.28.180.255:22-139.178.89.65:35270.service - OpenSSH per-connection server daemon (139.178.89.65:35270). Jul 7 00:16:51.719997 sshd[8744]: Accepted publickey for core from 139.178.89.65 port 35270 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8 Jul 7 00:16:51.720698 sshd[8744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:51.723039 systemd-logind[1794]: New session 14 of user core. Jul 7 00:16:51.740586 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 00:16:51.839114 sshd[8744]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:51.860251 systemd[1]: sshd@11-147.28.180.255:22-139.178.89.65:35270.service: Deactivated successfully. Jul 7 00:16:51.861309 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:16:51.862157 systemd-logind[1794]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:16:51.863230 systemd[1]: Started sshd@12-147.28.180.255:22-139.178.89.65:35286.service - OpenSSH per-connection server daemon (139.178.89.65:35286). Jul 7 00:16:51.863889 systemd-logind[1794]: Removed session 14. 
Jul 7 00:16:51.911522 sshd[8770]: Accepted publickey for core from 139.178.89.65 port 35286 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8 Jul 7 00:16:51.912594 sshd[8770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:51.916372 systemd-logind[1794]: New session 15 of user core. Jul 7 00:16:51.935384 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 00:16:52.098249 sshd[8770]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:52.109786 systemd[1]: sshd@12-147.28.180.255:22-139.178.89.65:35286.service: Deactivated successfully. Jul 7 00:16:52.110623 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 00:16:52.111299 systemd-logind[1794]: Session 15 logged out. Waiting for processes to exit. Jul 7 00:16:52.112052 systemd[1]: Started sshd@13-147.28.180.255:22-139.178.89.65:35300.service - OpenSSH per-connection server daemon (139.178.89.65:35300). Jul 7 00:16:52.112662 systemd-logind[1794]: Removed session 15. Jul 7 00:16:52.139880 sshd[8794]: Accepted publickey for core from 139.178.89.65 port 35300 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8 Jul 7 00:16:52.140566 sshd[8794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:52.143045 systemd-logind[1794]: New session 16 of user core. Jul 7 00:16:52.157668 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 00:16:52.321853 sshd[8794]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:52.323970 systemd[1]: sshd@13-147.28.180.255:22-139.178.89.65:35300.service: Deactivated successfully. Jul 7 00:16:52.324911 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 00:16:52.325366 systemd-logind[1794]: Session 16 logged out. Waiting for processes to exit. Jul 7 00:16:52.325982 systemd-logind[1794]: Removed session 16. 
Jul 7 00:16:57.345870 systemd[1]: Started sshd@14-147.28.180.255:22-139.178.89.65:35304.service - OpenSSH per-connection server daemon (139.178.89.65:35304).
Jul 7 00:16:57.380623 sshd[8829]: Accepted publickey for core from 139.178.89.65 port 35304 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8
Jul 7 00:16:57.381404 sshd[8829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:16:57.384085 systemd-logind[1794]: New session 17 of user core.
Jul 7 00:16:57.399660 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 00:16:57.494897 sshd[8829]: pam_unix(sshd:session): session closed for user core
Jul 7 00:16:57.496537 systemd[1]: sshd@14-147.28.180.255:22-139.178.89.65:35304.service: Deactivated successfully.
Jul 7 00:16:57.497540 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 00:16:57.498267 systemd-logind[1794]: Session 17 logged out. Waiting for processes to exit.
Jul 7 00:16:57.498926 systemd-logind[1794]: Removed session 17.
Jul 7 00:16:58.103513 update_engine[1799]: I20250707 00:16:58.103355 1799 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 00:16:58.104356 update_engine[1799]: I20250707 00:16:58.103890 1799 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 00:16:58.104376 update_engine[1799]: I20250707 00:16:58.104337 1799 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 00:16:58.104857 update_engine[1799]: E20250707 00:16:58.104815 1799 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 00:16:58.104857 update_engine[1799]: I20250707 00:16:58.104841 1799 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jul 7 00:17:02.516975 systemd[1]: Started sshd@15-147.28.180.255:22-139.178.89.65:33972.service - OpenSSH per-connection server daemon (139.178.89.65:33972).
Jul 7 00:17:02.545244 sshd[8856]: Accepted publickey for core from 139.178.89.65 port 33972 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8
Jul 7 00:17:02.545941 sshd[8856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:17:02.548570 systemd-logind[1794]: New session 18 of user core.
Jul 7 00:17:02.560580 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 00:17:02.656343 sshd[8856]: pam_unix(sshd:session): session closed for user core
Jul 7 00:17:02.657910 systemd[1]: sshd@15-147.28.180.255:22-139.178.89.65:33972.service: Deactivated successfully.
Jul 7 00:17:02.658822 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 00:17:02.659533 systemd-logind[1794]: Session 18 logged out. Waiting for processes to exit.
Jul 7 00:17:02.660070 systemd-logind[1794]: Removed session 18.
Jul 7 00:17:07.683867 systemd[1]: Started sshd@16-147.28.180.255:22-139.178.89.65:33986.service - OpenSSH per-connection server daemon (139.178.89.65:33986).
Jul 7 00:17:07.715869 sshd[8884]: Accepted publickey for core from 139.178.89.65 port 33986 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8
Jul 7 00:17:07.716552 sshd[8884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:17:07.718887 systemd-logind[1794]: New session 19 of user core.
Jul 7 00:17:07.737380 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 00:17:07.836667 sshd[8884]: pam_unix(sshd:session): session closed for user core
Jul 7 00:17:07.844650 systemd[1]: sshd@16-147.28.180.255:22-139.178.89.65:33986.service: Deactivated successfully.
Jul 7 00:17:07.848716 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 00:17:07.850624 systemd-logind[1794]: Session 19 logged out. Waiting for processes to exit.
Jul 7 00:17:07.853552 systemd-logind[1794]: Removed session 19.
Jul 7 00:17:08.102789 update_engine[1799]: I20250707 00:17:08.102636 1799 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 00:17:08.103646 update_engine[1799]: I20250707 00:17:08.103197 1799 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 00:17:08.103783 update_engine[1799]: I20250707 00:17:08.103669 1799 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 00:17:08.104569 update_engine[1799]: E20250707 00:17:08.104442 1799 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 00:17:08.104776 update_engine[1799]: I20250707 00:17:08.104583 1799 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 7 00:17:08.104776 update_engine[1799]: I20250707 00:17:08.104614 1799 omaha_request_action.cc:617] Omaha request response:
Jul 7 00:17:08.105001 update_engine[1799]: E20250707 00:17:08.104781 1799 omaha_request_action.cc:636] Omaha request network transfer failed.
Jul 7 00:17:08.105001 update_engine[1799]: I20250707 00:17:08.104835 1799 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 7 00:17:08.105001 update_engine[1799]: I20250707 00:17:08.104852 1799 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 00:17:08.105001 update_engine[1799]: I20250707 00:17:08.104868 1799 update_attempter.cc:306] Processing Done.
Jul 7 00:17:08.105001 update_engine[1799]: E20250707 00:17:08.104900 1799 update_attempter.cc:619] Update failed.
Jul 7 00:17:08.105001 update_engine[1799]: I20250707 00:17:08.104918 1799 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 7 00:17:08.105001 update_engine[1799]: I20250707 00:17:08.104933 1799 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 7 00:17:08.105001 update_engine[1799]: I20250707 00:17:08.104949 1799 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 7 00:17:08.105986 update_engine[1799]: I20250707 00:17:08.105105 1799 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 7 00:17:08.105986 update_engine[1799]: I20250707 00:17:08.105216 1799 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 7 00:17:08.105986 update_engine[1799]: I20250707 00:17:08.105240 1799 omaha_request_action.cc:272] Request:
Jul 7 00:17:08.105986 update_engine[1799]:
Jul 7 00:17:08.105986 update_engine[1799]:
Jul 7 00:17:08.105986 update_engine[1799]:
Jul 7 00:17:08.105986 update_engine[1799]:
Jul 7 00:17:08.105986 update_engine[1799]:
Jul 7 00:17:08.105986 update_engine[1799]:
Jul 7 00:17:08.105986 update_engine[1799]: I20250707 00:17:08.105257 1799 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 00:17:08.105986 update_engine[1799]: I20250707 00:17:08.105654 1799 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 00:17:08.106935 locksmithd[1841]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 7 00:17:08.107628 update_engine[1799]: I20250707 00:17:08.106071 1799 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 00:17:08.107628 update_engine[1799]: E20250707 00:17:08.106894 1799 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 00:17:08.107628 update_engine[1799]: I20250707 00:17:08.107024 1799 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 7 00:17:08.107628 update_engine[1799]: I20250707 00:17:08.107052 1799 omaha_request_action.cc:617] Omaha request response:
Jul 7 00:17:08.107628 update_engine[1799]: I20250707 00:17:08.107070 1799 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 00:17:08.107628 update_engine[1799]: I20250707 00:17:08.107086 1799 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 00:17:08.107628 update_engine[1799]: I20250707 00:17:08.107102 1799 update_attempter.cc:306] Processing Done.
Jul 7 00:17:08.107628 update_engine[1799]: I20250707 00:17:08.107120 1799 update_attempter.cc:310] Error event sent.
Jul 7 00:17:08.107628 update_engine[1799]: I20250707 00:17:08.107193 1799 update_check_scheduler.cc:74] Next update check in 44m58s
Jul 7 00:17:08.108600 locksmithd[1841]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jul 7 00:17:12.853961 systemd[1]: Started sshd@17-147.28.180.255:22-139.178.89.65:45620.service - OpenSSH per-connection server daemon (139.178.89.65:45620).
Jul 7 00:17:12.882839 sshd[9003]: Accepted publickey for core from 139.178.89.65 port 45620 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8
Jul 7 00:17:12.883588 sshd[9003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:17:12.886323 systemd-logind[1794]: New session 20 of user core.
Jul 7 00:17:12.907626 systemd[1]: Started session-20.scope - Session 20 of User core.
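[Editor's note] The recurring "Could not resolve host: disabled" failures in the update_engine entries above are the usual signature of a Flatcar host whose update server URL has been replaced with the literal string "disabled": update_engine then tries to resolve "disabled" as a hostname, the DNS lookup fails, and the Omaha check retries on schedule (hence "Next update check in 44m58s"). A minimal sketch of the configuration that typically produces this behavior is shown below; the exact contents of this host's update.conf are an assumption, not taken from this log.

```ini
; /etc/flatcar/update.conf -- sketch; assumed, not captured from this host.
; Replacing the Omaha server URL with the word "disabled" makes every
; update check fail at DNS resolution, which is what the log shows.
GROUP=stable
SERVER=disabled
```

With this setting, update_engine keeps waking on its normal schedule and reports each failed check to locksmithd, which matches the UPDATE_STATUS_REPORTING_ERROR_EVENT and UPDATE_STATUS_IDLE transitions logged above.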
Jul 7 00:17:13.001511 sshd[9003]: pam_unix(sshd:session): session closed for user core
Jul 7 00:17:13.015799 systemd[1]: sshd@17-147.28.180.255:22-139.178.89.65:45620.service: Deactivated successfully.
Jul 7 00:17:13.016632 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 00:17:13.017376 systemd-logind[1794]: Session 20 logged out. Waiting for processes to exit.
Jul 7 00:17:13.017985 systemd[1]: Started sshd@18-147.28.180.255:22-139.178.89.65:45632.service - OpenSSH per-connection server daemon (139.178.89.65:45632).
Jul 7 00:17:13.018553 systemd-logind[1794]: Removed session 20.
Jul 7 00:17:13.053600 sshd[9029]: Accepted publickey for core from 139.178.89.65 port 45632 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8
Jul 7 00:17:13.054408 sshd[9029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:17:13.057345 systemd-logind[1794]: New session 21 of user core.
Jul 7 00:17:13.069387 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 00:17:13.168707 sshd[9029]: pam_unix(sshd:session): session closed for user core
Jul 7 00:17:13.195772 systemd[1]: sshd@18-147.28.180.255:22-139.178.89.65:45632.service: Deactivated successfully.
Jul 7 00:17:13.196629 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 00:17:13.197331 systemd-logind[1794]: Session 21 logged out. Waiting for processes to exit.
Jul 7 00:17:13.197924 systemd[1]: Started sshd@19-147.28.180.255:22-139.178.89.65:45634.service - OpenSSH per-connection server daemon (139.178.89.65:45634).
Jul 7 00:17:13.198370 systemd-logind[1794]: Removed session 21.
Jul 7 00:17:13.232384 sshd[9052]: Accepted publickey for core from 139.178.89.65 port 45634 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8
Jul 7 00:17:13.233136 sshd[9052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:17:13.235958 systemd-logind[1794]: New session 22 of user core.
Jul 7 00:17:13.250287 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 00:17:14.166452 sshd[9052]: pam_unix(sshd:session): session closed for user core
Jul 7 00:17:14.178873 systemd[1]: sshd@19-147.28.180.255:22-139.178.89.65:45634.service: Deactivated successfully.
Jul 7 00:17:14.180241 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 00:17:14.181342 systemd-logind[1794]: Session 22 logged out. Waiting for processes to exit.
Jul 7 00:17:14.182456 systemd[1]: Started sshd@20-147.28.180.255:22-139.178.89.65:45640.service - OpenSSH per-connection server daemon (139.178.89.65:45640).
Jul 7 00:17:14.183043 systemd-logind[1794]: Removed session 22.
Jul 7 00:17:14.232229 sshd[9093]: Accepted publickey for core from 139.178.89.65 port 45640 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8
Jul 7 00:17:14.235749 sshd[9093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:17:14.247475 systemd-logind[1794]: New session 23 of user core.
Jul 7 00:17:14.262586 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 00:17:14.446122 sshd[9093]: pam_unix(sshd:session): session closed for user core
Jul 7 00:17:14.462032 systemd[1]: sshd@20-147.28.180.255:22-139.178.89.65:45640.service: Deactivated successfully.
Jul 7 00:17:14.466121 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 00:17:14.469694 systemd-logind[1794]: Session 23 logged out. Waiting for processes to exit.
Jul 7 00:17:14.482840 systemd[1]: Started sshd@21-147.28.180.255:22-139.178.89.65:45644.service - OpenSSH per-connection server daemon (139.178.89.65:45644).
Jul 7 00:17:14.486262 systemd-logind[1794]: Removed session 23.
Jul 7 00:17:14.571523 sshd[9119]: Accepted publickey for core from 139.178.89.65 port 45644 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8
Jul 7 00:17:14.572557 sshd[9119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:17:14.574954 systemd-logind[1794]: New session 24 of user core.
Jul 7 00:17:14.581328 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 7 00:17:14.657963 sshd[9119]: pam_unix(sshd:session): session closed for user core
Jul 7 00:17:14.659829 systemd[1]: sshd@21-147.28.180.255:22-139.178.89.65:45644.service: Deactivated successfully.
Jul 7 00:17:14.660713 systemd[1]: session-24.scope: Deactivated successfully.
Jul 7 00:17:14.661038 systemd-logind[1794]: Session 24 logged out. Waiting for processes to exit.
Jul 7 00:17:14.661653 systemd-logind[1794]: Removed session 24.
Jul 7 00:17:19.674950 systemd[1]: Started sshd@22-147.28.180.255:22-139.178.89.65:58400.service - OpenSSH per-connection server daemon (139.178.89.65:58400).
Jul 7 00:17:19.703588 sshd[9186]: Accepted publickey for core from 139.178.89.65 port 58400 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8
Jul 7 00:17:19.704311 sshd[9186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:17:19.706884 systemd-logind[1794]: New session 25 of user core.
Jul 7 00:17:19.718418 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 7 00:17:19.803290 sshd[9186]: pam_unix(sshd:session): session closed for user core
Jul 7 00:17:19.804837 systemd[1]: sshd@22-147.28.180.255:22-139.178.89.65:58400.service: Deactivated successfully.
Jul 7 00:17:19.805764 systemd[1]: session-25.scope: Deactivated successfully.
Jul 7 00:17:19.806438 systemd-logind[1794]: Session 25 logged out. Waiting for processes to exit.
Jul 7 00:17:19.806943 systemd-logind[1794]: Removed session 25.
Jul 7 00:17:24.818937 systemd[1]: Started sshd@23-147.28.180.255:22-139.178.89.65:58412.service - OpenSSH per-connection server daemon (139.178.89.65:58412).
Jul 7 00:17:24.864337 sshd[9230]: Accepted publickey for core from 139.178.89.65 port 58412 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8
Jul 7 00:17:24.864991 sshd[9230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:17:24.867588 systemd-logind[1794]: New session 26 of user core.
Jul 7 00:17:24.878624 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 7 00:17:24.973884 sshd[9230]: pam_unix(sshd:session): session closed for user core
Jul 7 00:17:24.975497 systemd[1]: sshd@23-147.28.180.255:22-139.178.89.65:58412.service: Deactivated successfully.
Jul 7 00:17:24.976391 systemd[1]: session-26.scope: Deactivated successfully.
Jul 7 00:17:24.977061 systemd-logind[1794]: Session 26 logged out. Waiting for processes to exit.
Jul 7 00:17:24.977782 systemd-logind[1794]: Removed session 26.
Jul 7 00:17:30.005513 systemd[1]: Started sshd@24-147.28.180.255:22-139.178.89.65:58840.service - OpenSSH per-connection server daemon (139.178.89.65:58840).
Jul 7 00:17:30.086797 sshd[9258]: Accepted publickey for core from 139.178.89.65 port 58840 ssh2: RSA SHA256:cP9RXefuyWP+JgN1ps3XtJ21hLQZH71jpAyvYZSeMs8
Jul 7 00:17:30.088431 sshd[9258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:17:30.092395 systemd-logind[1794]: New session 27 of user core.
Jul 7 00:17:30.105472 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 7 00:17:30.264702 sshd[9258]: pam_unix(sshd:session): session closed for user core
Jul 7 00:17:30.271750 systemd[1]: sshd@24-147.28.180.255:22-139.178.89.65:58840.service: Deactivated successfully.
Jul 7 00:17:30.276062 systemd[1]: session-27.scope: Deactivated successfully.
Jul 7 00:17:30.279743 systemd-logind[1794]: Session 27 logged out. Waiting for processes to exit.
Jul 7 00:17:30.282448 systemd-logind[1794]: Removed session 27.