Aug 13 08:40:44.014012 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 08:40:44.014026 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 08:40:44.014033 kernel: BIOS-provided physical RAM map:
Aug 13 08:40:44.014037 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Aug 13 08:40:44.014041 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Aug 13 08:40:44.014045 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Aug 13 08:40:44.014050 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Aug 13 08:40:44.014054 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Aug 13 08:40:44.014058 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081a73fff] usable
Aug 13 08:40:44.014062 kernel: BIOS-e820: [mem 0x0000000081a74000-0x0000000081a74fff] ACPI NVS
Aug 13 08:40:44.014066 kernel: BIOS-e820: [mem 0x0000000081a75000-0x0000000081a75fff] reserved
Aug 13 08:40:44.014071 kernel: BIOS-e820: [mem 0x0000000081a76000-0x000000008afcdfff] usable
Aug 13 08:40:44.014076 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved
Aug 13 08:40:44.014080 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable
Aug 13 08:40:44.014085 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS
Aug 13 08:40:44.014090 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved
Aug 13 08:40:44.014095 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Aug 13 08:40:44.014100 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Aug 13 08:40:44.014105 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Aug 13 08:40:44.014109 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Aug 13 08:40:44.014114 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Aug 13 08:40:44.014119 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Aug 13 08:40:44.014123 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Aug 13 08:40:44.014128 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Aug 13 08:40:44.014133 kernel: NX (Execute Disable) protection: active
Aug 13 08:40:44.014137 kernel: APIC: Static calls initialized
Aug 13 08:40:44.014142 kernel: SMBIOS 3.2.1 present.
Aug 13 08:40:44.014147 kernel: DMI: Supermicro X11SCM-F/X11SCM-F, BIOS 2.6 12/03/2024
Aug 13 08:40:44.014152 kernel: tsc: Detected 3400.000 MHz processor
Aug 13 08:40:44.014157 kernel: tsc: Detected 3399.906 MHz TSC
Aug 13 08:40:44.014162 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 08:40:44.014167 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 08:40:44.014172 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Aug 13 08:40:44.014177 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Aug 13 08:40:44.014182 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 08:40:44.014186 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Aug 13 08:40:44.014191 kernel: Using GB pages for direct mapping
Aug 13 08:40:44.014197 kernel: ACPI: Early table checksum verification disabled
Aug 13 08:40:44.014202 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Aug 13 08:40:44.014207 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Aug 13 08:40:44.014213 kernel: ACPI: FACP 0x000000008C58B670 000114 (v06 01072009 AMI 00010013)
Aug 13 08:40:44.014218 kernel: ACPI: DSDT 0x000000008C54F268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Aug 13 08:40:44.014223 kernel: ACPI: FACS 0x000000008C66DF80 000040
Aug 13 08:40:44.014229 kernel: ACPI: APIC 0x000000008C58B788 00012C (v04 01072009 AMI 00010013)
Aug 13 08:40:44.014234 kernel: ACPI: FPDT 0x000000008C58B8B8 000044 (v01 01072009 AMI 00010013)
Aug 13 08:40:44.014240 kernel: ACPI: FIDT 0x000000008C58B900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Aug 13 08:40:44.014245 kernel: ACPI: MCFG 0x000000008C58B9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Aug 13 08:40:44.014250 kernel: ACPI: SPMI 0x000000008C58B9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Aug 13 08:40:44.014255 kernel: ACPI: SSDT 0x000000008C58BA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Aug 13 08:40:44.014260 kernel: ACPI: SSDT 0x000000008C58D548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Aug 13 08:40:44.014265 kernel: ACPI: SSDT 0x000000008C590710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Aug 13 08:40:44.014271 kernel: ACPI: HPET 0x000000008C592A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Aug 13 08:40:44.014276 kernel: ACPI: SSDT 0x000000008C592A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Aug 13 08:40:44.014281 kernel: ACPI: SSDT 0x000000008C593A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Aug 13 08:40:44.014286 kernel: ACPI: UEFI 0x000000008C594320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Aug 13 08:40:44.014291 kernel: ACPI: LPIT 0x000000008C594368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Aug 13 08:40:44.014296 kernel: ACPI: SSDT 0x000000008C594400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Aug 13 08:40:44.014301 kernel: ACPI: SSDT 0x000000008C596BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Aug 13 08:40:44.014306 kernel: ACPI: DBGP 0x000000008C5980C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Aug 13 08:40:44.014311 kernel: ACPI: DBG2 0x000000008C598100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Aug 13 08:40:44.014317 kernel: ACPI: SSDT 0x000000008C598158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Aug 13 08:40:44.014322 kernel: ACPI: DMAR 0x000000008C599CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Aug 13 08:40:44.014327 kernel: ACPI: SSDT 0x000000008C599D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Aug 13 08:40:44.014332 kernel: ACPI: TPM2 0x000000008C599E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Aug 13 08:40:44.014337 kernel: ACPI: SSDT 0x000000008C599EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Aug 13 08:40:44.014342 kernel: ACPI: WSMT 0x000000008C59AC40 000028 (v01 SUPERM 01072009 AMI 00010013)
Aug 13 08:40:44.014347 kernel: ACPI: EINJ 0x000000008C59AC68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Aug 13 08:40:44.014352 kernel: ACPI: ERST 0x000000008C59AD98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Aug 13 08:40:44.014358 kernel: ACPI: BERT 0x000000008C59AFC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Aug 13 08:40:44.014363 kernel: ACPI: HEST 0x000000008C59AFF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Aug 13 08:40:44.014368 kernel: ACPI: SSDT 0x000000008C59B278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Aug 13 08:40:44.014373 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b670-0x8c58b783]
Aug 13 08:40:44.014378 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b66b]
Aug 13 08:40:44.014383 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf]
Aug 13 08:40:44.014388 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b788-0x8c58b8b3]
Aug 13 08:40:44.014393 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b8b8-0x8c58b8fb]
Aug 13 08:40:44.014398 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b900-0x8c58b99b]
Aug 13 08:40:44.014404 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b9a0-0x8c58b9db]
Aug 13 08:40:44.014409 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b9e0-0x8c58ba20]
Aug 13 08:40:44.014414 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58ba28-0x8c58d543]
Aug 13 08:40:44.014419 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d548-0x8c59070d]
Aug 13 08:40:44.014424 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590710-0x8c592a3a]
Aug 13 08:40:44.014429 kernel: ACPI: Reserving HPET table memory at [mem 0x8c592a40-0x8c592a77]
Aug 13 08:40:44.014434 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a78-0x8c593a25]
Aug 13 08:40:44.014439 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593a28-0x8c59431b]
Aug 13 08:40:44.014444 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c594320-0x8c594361]
Aug 13 08:40:44.014450 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c594368-0x8c5943fb]
Aug 13 08:40:44.014455 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594400-0x8c596bdd]
Aug 13 08:40:44.014460 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596be0-0x8c5980c1]
Aug 13 08:40:44.014465 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5980c8-0x8c5980fb]
Aug 13 08:40:44.014470 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598100-0x8c598153]
Aug 13 08:40:44.014475 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598158-0x8c599cbe]
Aug 13 08:40:44.014480 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599cc0-0x8c599d2f]
Aug 13 08:40:44.014485 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599d30-0x8c599e73]
Aug 13 08:40:44.014490 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599e78-0x8c599eab]
Aug 13 08:40:44.014496 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599eb0-0x8c59ac3e]
Aug 13 08:40:44.014501 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59ac40-0x8c59ac67]
Aug 13 08:40:44.014506 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59ac68-0x8c59ad97]
Aug 13 08:40:44.014511 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad98-0x8c59afc7]
Aug 13 08:40:44.014516 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59afc8-0x8c59aff7]
Aug 13 08:40:44.014521 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59aff8-0x8c59b273]
Aug 13 08:40:44.014526 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b278-0x8c59b3d9]
Aug 13 08:40:44.014531 kernel: No NUMA configuration found
Aug 13 08:40:44.014536 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Aug 13 08:40:44.014547 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Aug 13 08:40:44.014553 kernel: Zone ranges:
Aug 13 08:40:44.014558 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 08:40:44.014563 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 08:40:44.014569 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Aug 13 08:40:44.014574 kernel: Movable zone start for each node
Aug 13 08:40:44.014579 kernel: Early memory node ranges
Aug 13 08:40:44.014584 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Aug 13 08:40:44.014589 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Aug 13 08:40:44.014594 kernel: node 0: [mem 0x0000000040400000-0x0000000081a73fff]
Aug 13 08:40:44.014600 kernel: node 0: [mem 0x0000000081a76000-0x000000008afcdfff]
Aug 13 08:40:44.014605 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff]
Aug 13 08:40:44.014610 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Aug 13 08:40:44.014615 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Aug 13 08:40:44.014624 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Aug 13 08:40:44.014630 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 08:40:44.014635 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Aug 13 08:40:44.014641 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Aug 13 08:40:44.014647 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Aug 13 08:40:44.014652 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Aug 13 08:40:44.014658 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges
Aug 13 08:40:44.014663 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Aug 13 08:40:44.014669 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Aug 13 08:40:44.014674 kernel: ACPI: PM-Timer IO Port: 0x1808
Aug 13 08:40:44.014680 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Aug 13 08:40:44.014685 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Aug 13 08:40:44.014690 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Aug 13 08:40:44.014696 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Aug 13 08:40:44.014702 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Aug 13 08:40:44.014707 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Aug 13 08:40:44.014713 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Aug 13 08:40:44.014718 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Aug 13 08:40:44.014723 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Aug 13 08:40:44.014729 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Aug 13 08:40:44.014734 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Aug 13 08:40:44.014739 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Aug 13 08:40:44.014745 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Aug 13 08:40:44.014751 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Aug 13 08:40:44.014756 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Aug 13 08:40:44.014762 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Aug 13 08:40:44.014767 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Aug 13 08:40:44.014772 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 08:40:44.014778 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 08:40:44.014783 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 08:40:44.014789 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 08:40:44.014794 kernel: TSC deadline timer available
Aug 13 08:40:44.014801 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Aug 13 08:40:44.014806 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Aug 13 08:40:44.014812 kernel: Booting paravirtualized kernel on bare hardware
Aug 13 08:40:44.014817 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 08:40:44.014823 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Aug 13 08:40:44.014828 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144
Aug 13 08:40:44.014834 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152
Aug 13 08:40:44.014839 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Aug 13 08:40:44.014846 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 08:40:44.014851 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 08:40:44.014857 kernel: random: crng init done
Aug 13 08:40:44.014862 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Aug 13 08:40:44.014867 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Aug 13 08:40:44.014873 kernel: Fallback order for Node 0: 0
Aug 13 08:40:44.014878 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416
Aug 13 08:40:44.014884 kernel: Policy zone: Normal
Aug 13 08:40:44.014890 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 08:40:44.014895 kernel: software IO TLB: area num 16.
Aug 13 08:40:44.014901 kernel: Memory: 32720308K/33452984K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 732416K reserved, 0K cma-reserved)
Aug 13 08:40:44.014906 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Aug 13 08:40:44.014912 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 08:40:44.014917 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 08:40:44.014923 kernel: Dynamic Preempt: voluntary
Aug 13 08:40:44.014928 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 08:40:44.014934 kernel: rcu: RCU event tracing is enabled.
Aug 13 08:40:44.014941 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Aug 13 08:40:44.014946 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 08:40:44.014952 kernel: Rude variant of Tasks RCU enabled.
Aug 13 08:40:44.014957 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 08:40:44.014963 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 08:40:44.014968 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Aug 13 08:40:44.014973 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Aug 13 08:40:44.014979 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 08:40:44.014984 kernel: Console: colour dummy device 80x25
Aug 13 08:40:44.014990 kernel: printk: console [tty0] enabled
Aug 13 08:40:44.014996 kernel: printk: console [ttyS1] enabled
Aug 13 08:40:44.015001 kernel: ACPI: Core revision 20230628
Aug 13 08:40:44.015007 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Aug 13 08:40:44.015013 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 08:40:44.015018 kernel: DMAR: Host address width 39
Aug 13 08:40:44.015023 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Aug 13 08:40:44.015029 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Aug 13 08:40:44.015034 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff
Aug 13 08:40:44.015040 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Aug 13 08:40:44.015046 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Aug 13 08:40:44.015051 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Aug 13 08:40:44.015057 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Aug 13 08:40:44.015062 kernel: x2apic enabled
Aug 13 08:40:44.015068 kernel: APIC: Switched APIC routing to: cluster x2apic
Aug 13 08:40:44.015073 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Aug 13 08:40:44.015079 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Aug 13 08:40:44.015084 kernel: CPU0: Thermal monitoring enabled (TM1)
Aug 13 08:40:44.015089 kernel: process: using mwait in idle threads
Aug 13 08:40:44.015096 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Aug 13 08:40:44.015101 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Aug 13 08:40:44.015106 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 08:40:44.015112 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Aug 13 08:40:44.015117 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Aug 13 08:40:44.015122 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Aug 13 08:40:44.015128 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Aug 13 08:40:44.015133 kernel: RETBleed: Mitigation: Enhanced IBRS
Aug 13 08:40:44.015138 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 08:40:44.015143 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 08:40:44.015149 kernel: TAA: Mitigation: TSX disabled
Aug 13 08:40:44.015155 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Aug 13 08:40:44.015160 kernel: SRBDS: Mitigation: Microcode
Aug 13 08:40:44.015166 kernel: GDS: Mitigation: Microcode
Aug 13 08:40:44.015171 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 08:40:44.015177 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 08:40:44.015182 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 08:40:44.015187 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 08:40:44.015193 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Aug 13 08:40:44.015198 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Aug 13 08:40:44.015203 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 08:40:44.015209 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Aug 13 08:40:44.015215 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Aug 13 08:40:44.015220 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Aug 13 08:40:44.015226 kernel: Freeing SMP alternatives memory: 32K
Aug 13 08:40:44.015231 kernel: pid_max: default: 32768 minimum: 301
Aug 13 08:40:44.015236 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 08:40:44.015242 kernel: landlock: Up and running.
Aug 13 08:40:44.015247 kernel: SELinux: Initializing.
Aug 13 08:40:44.015252 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 08:40:44.015257 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 08:40:44.015263 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Aug 13 08:40:44.015268 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Aug 13 08:40:44.015275 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Aug 13 08:40:44.015280 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Aug 13 08:40:44.015286 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Aug 13 08:40:44.015291 kernel: ... version: 4
Aug 13 08:40:44.015297 kernel: ... bit width: 48
Aug 13 08:40:44.015302 kernel: ... generic registers: 4
Aug 13 08:40:44.015307 kernel: ... value mask: 0000ffffffffffff
Aug 13 08:40:44.015313 kernel: ... max period: 00007fffffffffff
Aug 13 08:40:44.015318 kernel: ... fixed-purpose events: 3
Aug 13 08:40:44.015325 kernel: ... event mask: 000000070000000f
Aug 13 08:40:44.015330 kernel: signal: max sigframe size: 2032
Aug 13 08:40:44.015335 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Aug 13 08:40:44.015341 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 08:40:44.015346 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 08:40:44.015352 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Aug 13 08:40:44.015357 kernel: smp: Bringing up secondary CPUs ...
Aug 13 08:40:44.015363 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 08:40:44.015368 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Aug 13 08:40:44.015375 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Aug 13 08:40:44.015380 kernel: smp: Brought up 1 node, 16 CPUs
Aug 13 08:40:44.015386 kernel: smpboot: Max logical packages: 1
Aug 13 08:40:44.015391 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Aug 13 08:40:44.015396 kernel: devtmpfs: initialized
Aug 13 08:40:44.015402 kernel: x86/mm: Memory block size: 128MB
Aug 13 08:40:44.015407 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81a74000-0x81a74fff] (4096 bytes)
Aug 13 08:40:44.015413 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes)
Aug 13 08:40:44.015418 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 08:40:44.015424 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Aug 13 08:40:44.015430 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 08:40:44.015435 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 08:40:44.015441 kernel: audit: initializing netlink subsys (disabled)
Aug 13 08:40:44.015446 kernel: audit: type=2000 audit(1755074438.039:1): state=initialized audit_enabled=0 res=1
Aug 13 08:40:44.015451 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 08:40:44.015457 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 08:40:44.015462 kernel: cpuidle: using governor menu
Aug 13 08:40:44.015467 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 08:40:44.015474 kernel: dca service started, version 1.12.1
Aug 13 08:40:44.015479 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Aug 13 08:40:44.015485 kernel: PCI: Using configuration type 1 for base access
Aug 13 08:40:44.015490 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Aug 13 08:40:44.015495 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 08:40:44.015501 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 08:40:44.015506 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 08:40:44.015512 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 08:40:44.015518 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 08:40:44.015523 kernel: ACPI: Added _OSI(Module Device)
Aug 13 08:40:44.015529 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 08:40:44.015534 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 08:40:44.015541 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Aug 13 08:40:44.015547 kernel: ACPI: Dynamic OEM Table Load:
Aug 13 08:40:44.015552 kernel: ACPI: SSDT 0xFFFF960B41AF0800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Aug 13 08:40:44.015558 kernel: ACPI: Dynamic OEM Table Load:
Aug 13 08:40:44.015563 kernel: ACPI: SSDT 0xFFFF960B41AEE800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Aug 13 08:40:44.015570 kernel: ACPI: Dynamic OEM Table Load:
Aug 13 08:40:44.015575 kernel: ACPI: SSDT 0xFFFF960B40246300 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Aug 13 08:40:44.015581 kernel: ACPI: Dynamic OEM Table Load:
Aug 13 08:40:44.015586 kernel: ACPI: SSDT 0xFFFF960B41AE9000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Aug 13 08:40:44.015591 kernel: ACPI: Dynamic OEM Table Load:
Aug 13 08:40:44.015597 kernel: ACPI: SSDT 0xFFFF960B4012B000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Aug 13 08:40:44.015602 kernel: ACPI: Dynamic OEM Table Load:
Aug 13 08:40:44.015607 kernel: ACPI: SSDT 0xFFFF960B41AF6C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Aug 13 08:40:44.015613 kernel: ACPI: _OSC evaluated successfully for all CPUs
Aug 13 08:40:44.015618 kernel: ACPI: Interpreter enabled
Aug 13 08:40:44.015625 kernel: ACPI: PM: (supports S0 S5)
Aug 13 08:40:44.015630 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 08:40:44.015636 kernel: HEST: Enabling Firmware First mode for corrected errors.
Aug 13 08:40:44.015641 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Aug 13 08:40:44.015646 kernel: HEST: Table parsing has been initialized.
Aug 13 08:40:44.015652 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Aug 13 08:40:44.015657 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 08:40:44.015663 kernel: PCI: Ignoring E820 reservations for host bridge windows
Aug 13 08:40:44.015668 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Aug 13 08:40:44.015674 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Aug 13 08:40:44.015680 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Aug 13 08:40:44.015686 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Aug 13 08:40:44.015691 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Aug 13 08:40:44.015696 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Aug 13 08:40:44.015702 kernel: ACPI: \_TZ_.FN00: New power resource
Aug 13 08:40:44.015707 kernel: ACPI: \_TZ_.FN01: New power resource
Aug 13 08:40:44.015713 kernel: ACPI: \_TZ_.FN02: New power resource
Aug 13 08:40:44.015718 kernel: ACPI: \_TZ_.FN03: New power resource
Aug 13 08:40:44.015725 kernel: ACPI: \_TZ_.FN04: New power resource
Aug 13 08:40:44.015730 kernel: ACPI: \PIN_: New power resource
Aug 13 08:40:44.015735 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Aug 13 08:40:44.015812 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 08:40:44.015866 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Aug 13 08:40:44.015915 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Aug 13 08:40:44.015923 kernel: PCI host bridge to bus 0000:00
Aug 13 08:40:44.015972 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 08:40:44.016019 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 08:40:44.016063 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 08:40:44.016104 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Aug 13 08:40:44.016147 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Aug 13 08:40:44.016188 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Aug 13 08:40:44.016248 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Aug 13 08:40:44.016307 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Aug 13 08:40:44.016358 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Aug 13 08:40:44.016413 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Aug 13 08:40:44.016462 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Aug 13 08:40:44.016514 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Aug 13 08:40:44.016567 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Aug 13 08:40:44.016624 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Aug 13 08:40:44.016672 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Aug 13 08:40:44.016721 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Aug 13 08:40:44.016773 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Aug 13 08:40:44.016822 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Aug 13 08:40:44.016869 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Aug 13 08:40:44.016923 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Aug 13 08:40:44.016974 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Aug 13 08:40:44.017026 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Aug 13 08:40:44.017074 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Aug 13 08:40:44.017127 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Aug 13 08:40:44.017176 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Aug 13 08:40:44.017227 kernel: pci 0000:00:16.0: PME# supported from D3hot
Aug 13 08:40:44.017278 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Aug 13 08:40:44.017326 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Aug 13 08:40:44.017383 kernel: pci 0000:00:16.1: PME# supported from D3hot
Aug 13 08:40:44.017434 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Aug 13 08:40:44.017483 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Aug 13 08:40:44.017530 kernel: pci 0000:00:16.4: PME# supported from D3hot
Aug 13 08:40:44.017588 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Aug 13 08:40:44.017637 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Aug 13 08:40:44.017685 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Aug 13 08:40:44.017732 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Aug 13 08:40:44.017780 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Aug 13 08:40:44.017827 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Aug 13 08:40:44.017876 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Aug 13 08:40:44.017927 kernel: pci 0000:00:17.0: PME# supported from D3hot
Aug 13 08:40:44.017983 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Aug 13 08:40:44.018034 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Aug 13 08:40:44.018087 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Aug 13 08:40:44.018139 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Aug 13 08:40:44.018192 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Aug 13 08:40:44.018242 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Aug 13 08:40:44.018294 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Aug 13 08:40:44.018344 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Aug 13 08:40:44.018396 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Aug 13 08:40:44.018448 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Aug 13 08:40:44.018501 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Aug 13 08:40:44.018583 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Aug 13 08:40:44.018637 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Aug 13 08:40:44.018694 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Aug 13 08:40:44.018744 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Aug 13 08:40:44.018795 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Aug 13 08:40:44.018847 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Aug 13 08:40:44.018896 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Aug 13 08:40:44.018951 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Aug 13 08:40:44.019003 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Aug 13 08:40:44.019052 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Aug 13 08:40:44.019103 kernel: pci 0000:01:00.0: PME# supported from D3cold
Aug 13 08:40:44.019155 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Aug 13 08:40:44.019206 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Aug 13 08:40:44.019261 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Aug 13 08:40:44.019311 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Aug 13 08:40:44.019362 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Aug 13 08:40:44.019411 kernel: pci 0000:01:00.1: PME# supported from D3cold
Aug 13 08:40:44.019461 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Aug 13 08:40:44.019513 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Aug 13 08:40:44.019567 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Aug 13 08:40:44.019615 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Aug 13 08:40:44.019666 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Aug 13 08:40:44.019716
kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Aug 13 08:40:44.019770 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Aug 13 08:40:44.019821 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Aug 13 08:40:44.019873 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Aug 13 08:40:44.019923 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Aug 13 08:40:44.019972 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Aug 13 08:40:44.020023 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Aug 13 08:40:44.020072 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Aug 13 08:40:44.020121 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Aug 13 08:40:44.020170 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Aug 13 08:40:44.020225 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Aug 13 08:40:44.020279 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Aug 13 08:40:44.020328 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Aug 13 08:40:44.020379 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Aug 13 08:40:44.020427 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Aug 13 08:40:44.020478 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Aug 13 08:40:44.020527 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Aug 13 08:40:44.020579 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Aug 13 08:40:44.020630 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Aug 13 08:40:44.020681 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Aug 13 08:40:44.020738 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Aug 13 08:40:44.020789 kernel: pci 0000:06:00.0: enabling Extended Tags Aug 13 08:40:44.020840 kernel: pci 0000:06:00.0: supports D1 D2 Aug 13 08:40:44.020891 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 13 08:40:44.020941 kernel: pci 0000:00:1c.3: PCI bridge 
to [bus 06-07] Aug 13 08:40:44.020992 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Aug 13 08:40:44.021042 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Aug 13 08:40:44.021094 kernel: pci_bus 0000:07: extended config space not accessible Aug 13 08:40:44.021151 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Aug 13 08:40:44.021205 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Aug 13 08:40:44.021256 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Aug 13 08:40:44.021308 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Aug 13 08:40:44.021359 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 08:40:44.021414 kernel: pci 0000:07:00.0: supports D1 D2 Aug 13 08:40:44.021466 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 13 08:40:44.021517 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Aug 13 08:40:44.021570 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Aug 13 08:40:44.021620 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Aug 13 08:40:44.021628 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Aug 13 08:40:44.021635 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Aug 13 08:40:44.021642 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Aug 13 08:40:44.021648 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Aug 13 08:40:44.021654 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Aug 13 08:40:44.021660 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Aug 13 08:40:44.021665 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Aug 13 08:40:44.021671 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Aug 13 08:40:44.021677 kernel: iommu: Default domain type: Translated Aug 13 08:40:44.021683 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 08:40:44.021689 kernel: PCI: Using ACPI for IRQ 
routing Aug 13 08:40:44.021696 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 08:40:44.021702 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Aug 13 08:40:44.021707 kernel: e820: reserve RAM buffer [mem 0x81a74000-0x83ffffff] Aug 13 08:40:44.021713 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] Aug 13 08:40:44.021718 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] Aug 13 08:40:44.021724 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Aug 13 08:40:44.021729 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Aug 13 08:40:44.021781 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Aug 13 08:40:44.021832 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Aug 13 08:40:44.021886 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 08:40:44.021895 kernel: vgaarb: loaded Aug 13 08:40:44.021901 kernel: clocksource: Switched to clocksource tsc-early Aug 13 08:40:44.021907 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 08:40:44.021912 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 08:40:44.021918 kernel: pnp: PnP ACPI init Aug 13 08:40:44.021969 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Aug 13 08:40:44.022017 kernel: pnp 00:02: [dma 0 disabled] Aug 13 08:40:44.022070 kernel: pnp 00:03: [dma 0 disabled] Aug 13 08:40:44.022117 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Aug 13 08:40:44.022163 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Aug 13 08:40:44.022211 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Aug 13 08:40:44.022256 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Aug 13 08:40:44.022300 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Aug 13 08:40:44.022347 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Aug 13 08:40:44.022391 kernel: system 00:05: [mem 
0xfed20000-0xfed3ffff] has been reserved Aug 13 08:40:44.022436 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Aug 13 08:40:44.022480 kernel: system 00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Aug 13 08:40:44.022524 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Aug 13 08:40:44.022576 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Aug 13 08:40:44.022625 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Aug 13 08:40:44.022672 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Aug 13 08:40:44.022716 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Aug 13 08:40:44.022760 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Aug 13 08:40:44.022804 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Aug 13 08:40:44.022848 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Aug 13 08:40:44.022895 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Aug 13 08:40:44.022903 kernel: pnp: PnP ACPI: found 9 devices Aug 13 08:40:44.022911 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 08:40:44.022917 kernel: NET: Registered PF_INET protocol family Aug 13 08:40:44.022923 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 08:40:44.022929 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Aug 13 08:40:44.022935 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 08:40:44.022941 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 08:40:44.022946 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Aug 13 08:40:44.022952 kernel: TCP: Hash tables configured (established 262144 bind 65536) Aug 13 08:40:44.022958 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, 
linear) Aug 13 08:40:44.022965 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 13 08:40:44.022970 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 08:40:44.022976 kernel: NET: Registered PF_XDP protocol family Aug 13 08:40:44.023026 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Aug 13 08:40:44.023075 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Aug 13 08:40:44.023124 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Aug 13 08:40:44.023176 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Aug 13 08:40:44.023226 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Aug 13 08:40:44.023278 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Aug 13 08:40:44.023328 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Aug 13 08:40:44.023378 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Aug 13 08:40:44.023428 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Aug 13 08:40:44.023477 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Aug 13 08:40:44.023525 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Aug 13 08:40:44.023580 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Aug 13 08:40:44.023629 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Aug 13 08:40:44.023677 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Aug 13 08:40:44.023725 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Aug 13 08:40:44.023774 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Aug 13 08:40:44.023822 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Aug 13 08:40:44.023871 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Aug 13 08:40:44.023922 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Aug 13 08:40:44.023972 kernel: pci 0000:06:00.0: bridge 
window [io 0x3000-0x3fff] Aug 13 08:40:44.024022 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Aug 13 08:40:44.024070 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Aug 13 08:40:44.024119 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Aug 13 08:40:44.024167 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Aug 13 08:40:44.024213 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Aug 13 08:40:44.024256 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 08:40:44.024300 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 08:40:44.024345 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 08:40:44.024388 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Aug 13 08:40:44.024429 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Aug 13 08:40:44.024478 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Aug 13 08:40:44.024523 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Aug 13 08:40:44.024577 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Aug 13 08:40:44.024624 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Aug 13 08:40:44.024673 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Aug 13 08:40:44.024717 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Aug 13 08:40:44.024765 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Aug 13 08:40:44.024810 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Aug 13 08:40:44.024856 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Aug 13 08:40:44.024903 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Aug 13 08:40:44.024913 kernel: PCI: CLS 64 bytes, default 64 Aug 13 08:40:44.024919 kernel: DMAR: No ATSR found Aug 13 08:40:44.024925 kernel: DMAR: No SATC found Aug 13 08:40:44.024931 kernel: DMAR: dmar0: Using 
Queued invalidation Aug 13 08:40:44.024980 kernel: pci 0000:00:00.0: Adding to iommu group 0 Aug 13 08:40:44.025029 kernel: pci 0000:00:01.0: Adding to iommu group 1 Aug 13 08:40:44.025078 kernel: pci 0000:00:08.0: Adding to iommu group 2 Aug 13 08:40:44.025127 kernel: pci 0000:00:12.0: Adding to iommu group 3 Aug 13 08:40:44.025177 kernel: pci 0000:00:14.0: Adding to iommu group 4 Aug 13 08:40:44.025227 kernel: pci 0000:00:14.2: Adding to iommu group 4 Aug 13 08:40:44.025275 kernel: pci 0000:00:15.0: Adding to iommu group 5 Aug 13 08:40:44.025323 kernel: pci 0000:00:15.1: Adding to iommu group 5 Aug 13 08:40:44.025372 kernel: pci 0000:00:16.0: Adding to iommu group 6 Aug 13 08:40:44.025421 kernel: pci 0000:00:16.1: Adding to iommu group 6 Aug 13 08:40:44.025470 kernel: pci 0000:00:16.4: Adding to iommu group 6 Aug 13 08:40:44.025518 kernel: pci 0000:00:17.0: Adding to iommu group 7 Aug 13 08:40:44.025570 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Aug 13 08:40:44.025621 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Aug 13 08:40:44.025670 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Aug 13 08:40:44.025718 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Aug 13 08:40:44.025766 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Aug 13 08:40:44.025814 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Aug 13 08:40:44.025863 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Aug 13 08:40:44.025911 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Aug 13 08:40:44.025960 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Aug 13 08:40:44.026012 kernel: pci 0000:01:00.0: Adding to iommu group 1 Aug 13 08:40:44.026062 kernel: pci 0000:01:00.1: Adding to iommu group 1 Aug 13 08:40:44.026112 kernel: pci 0000:03:00.0: Adding to iommu group 15 Aug 13 08:40:44.026163 kernel: pci 0000:04:00.0: Adding to iommu group 16 Aug 13 08:40:44.026212 kernel: pci 0000:06:00.0: Adding to iommu group 17 Aug 13 08:40:44.026263 kernel: pci 0000:07:00.0: Adding to iommu group 
17 Aug 13 08:40:44.026272 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Aug 13 08:40:44.026278 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 13 08:40:44.026286 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB) Aug 13 08:40:44.026292 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Aug 13 08:40:44.026298 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Aug 13 08:40:44.026304 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Aug 13 08:40:44.026309 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Aug 13 08:40:44.026361 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Aug 13 08:40:44.026370 kernel: Initialise system trusted keyrings Aug 13 08:40:44.026375 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Aug 13 08:40:44.026383 kernel: Key type asymmetric registered Aug 13 08:40:44.026388 kernel: Asymmetric key parser 'x509' registered Aug 13 08:40:44.026394 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 13 08:40:44.026400 kernel: io scheduler mq-deadline registered Aug 13 08:40:44.026406 kernel: io scheduler kyber registered Aug 13 08:40:44.026411 kernel: io scheduler bfq registered Aug 13 08:40:44.026459 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Aug 13 08:40:44.026509 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Aug 13 08:40:44.026562 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Aug 13 08:40:44.026614 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Aug 13 08:40:44.026662 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Aug 13 08:40:44.026712 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Aug 13 08:40:44.026765 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Aug 13 08:40:44.026773 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Aug 13 08:40:44.026779 kernel: ERST: Error 
Record Serialization Table (ERST) support is initialized. Aug 13 08:40:44.026785 kernel: pstore: Using crash dump compression: deflate Aug 13 08:40:44.026793 kernel: pstore: Registered erst as persistent store backend Aug 13 08:40:44.026799 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 08:40:44.026805 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 08:40:44.026811 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 08:40:44.026816 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Aug 13 08:40:44.026822 kernel: hpet_acpi_add: no address or irqs in _CRS Aug 13 08:40:44.026875 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Aug 13 08:40:44.026884 kernel: i8042: PNP: No PS/2 controller found. Aug 13 08:40:44.026928 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Aug 13 08:40:44.026977 kernel: rtc_cmos rtc_cmos: registered as rtc0 Aug 13 08:40:44.027022 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-08-13T08:40:42 UTC (1755074442) Aug 13 08:40:44.027066 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Aug 13 08:40:44.027074 kernel: intel_pstate: Intel P-state driver initializing Aug 13 08:40:44.027080 kernel: intel_pstate: Disabling energy efficiency optimization Aug 13 08:40:44.027086 kernel: intel_pstate: HWP enabled Aug 13 08:40:44.027092 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Aug 13 08:40:44.027098 kernel: vesafb: scrolling: redraw Aug 13 08:40:44.027105 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Aug 13 08:40:44.027111 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000005a2cb4df, using 768k, total 768k Aug 13 08:40:44.027117 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 08:40:44.027123 kernel: fb0: VESA VGA frame buffer device Aug 13 08:40:44.027128 kernel: NET: Registered PF_INET6 protocol family Aug 13 08:40:44.027134 kernel: Segment Routing with 
IPv6 Aug 13 08:40:44.027140 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 08:40:44.027146 kernel: NET: Registered PF_PACKET protocol family Aug 13 08:40:44.027151 kernel: Key type dns_resolver registered Aug 13 08:40:44.027158 kernel: microcode: Current revision: 0x00000102 Aug 13 08:40:44.027164 kernel: microcode: Microcode Update Driver: v2.2. Aug 13 08:40:44.027169 kernel: IPI shorthand broadcast: enabled Aug 13 08:40:44.027175 kernel: sched_clock: Marking stable (1561048071, 1378409101)->(4401160932, -1461703760) Aug 13 08:40:44.027181 kernel: registered taskstats version 1 Aug 13 08:40:44.027187 kernel: Loading compiled-in X.509 certificates Aug 13 08:40:44.027192 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041' Aug 13 08:40:44.027198 kernel: Key type .fscrypt registered Aug 13 08:40:44.027204 kernel: Key type fscrypt-provisioning registered Aug 13 08:40:44.027210 kernel: ima: Allocated hash algorithm: sha1 Aug 13 08:40:44.027216 kernel: ima: No architecture policies found Aug 13 08:40:44.027222 kernel: clk: Disabling unused clocks Aug 13 08:40:44.027228 kernel: Freeing unused kernel image (initmem) memory: 42876K Aug 13 08:40:44.027233 kernel: Write protecting the kernel read-only data: 36864k Aug 13 08:40:44.027239 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Aug 13 08:40:44.027245 kernel: Run /init as init process Aug 13 08:40:44.027251 kernel: with arguments: Aug 13 08:40:44.027256 kernel: /init Aug 13 08:40:44.027263 kernel: with environment: Aug 13 08:40:44.027269 kernel: HOME=/ Aug 13 08:40:44.027274 kernel: TERM=linux Aug 13 08:40:44.027280 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 08:40:44.027287 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 
+XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 08:40:44.027294 systemd[1]: Detected architecture x86-64. Aug 13 08:40:44.027300 systemd[1]: Running in initrd. Aug 13 08:40:44.027307 systemd[1]: No hostname configured, using default hostname. Aug 13 08:40:44.027313 systemd[1]: Hostname set to . Aug 13 08:40:44.027319 systemd[1]: Initializing machine ID from random generator. Aug 13 08:40:44.027325 systemd[1]: Queued start job for default target initrd.target. Aug 13 08:40:44.027331 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 08:40:44.027337 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 08:40:44.027344 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 08:40:44.027350 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 08:40:44.027357 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 08:40:44.027363 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 08:40:44.027369 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 08:40:44.027376 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 08:40:44.027382 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Aug 13 08:40:44.027387 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Aug 13 08:40:44.027393 kernel: clocksource: Switched to clocksource tsc Aug 13 08:40:44.027400 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Aug 13 08:40:44.027406 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 08:40:44.027412 systemd[1]: Reached target paths.target - Path Units. Aug 13 08:40:44.027419 systemd[1]: Reached target slices.target - Slice Units. Aug 13 08:40:44.027425 systemd[1]: Reached target swap.target - Swaps. Aug 13 08:40:44.027431 systemd[1]: Reached target timers.target - Timer Units. Aug 13 08:40:44.027437 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 08:40:44.027443 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 08:40:44.027449 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 08:40:44.027456 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 08:40:44.027462 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 08:40:44.027468 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 08:40:44.027474 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 08:40:44.027480 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 08:40:44.027486 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 08:40:44.027492 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 08:40:44.027498 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 08:40:44.027505 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 08:40:44.027511 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 08:40:44.027527 systemd-journald[267]: Collecting audit messages is disabled. Aug 13 08:40:44.027543 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Aug 13 08:40:44.027551 systemd-journald[267]: Journal started Aug 13 08:40:44.027565 systemd-journald[267]: Runtime Journal (/run/log/journal/5e8ab259764f4c4a890329a61880395b) is 8.0M, max 639.9M, 631.9M free. Aug 13 08:40:44.050718 systemd-modules-load[268]: Inserted module 'overlay' Aug 13 08:40:44.072594 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 08:40:44.101510 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 08:40:44.162487 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 08:40:44.162500 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 08:40:44.162509 kernel: Bridge firewalling registered Aug 13 08:40:44.157726 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 08:40:44.162464 systemd-modules-load[268]: Inserted module 'br_netfilter' Aug 13 08:40:44.181912 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 08:40:44.202924 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 08:40:44.221853 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 08:40:44.244865 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 08:40:44.247852 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 08:40:44.293911 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 08:40:44.296613 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 08:40:44.301492 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 08:40:44.301674 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Aug 13 08:40:44.302396 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 08:40:44.304959 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 08:40:44.305805 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 08:40:44.307114 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 08:40:44.310778 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 08:40:44.323202 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 08:40:44.323842 systemd-resolved[301]: Positive Trust Anchors: Aug 13 08:40:44.323847 systemd-resolved[301]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 08:40:44.323873 systemd-resolved[301]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 08:40:44.325589 systemd-resolved[301]: Defaulting to hostname 'linux'. Aug 13 08:40:44.350785 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 08:40:44.365763 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Aug 13 08:40:44.478214 dracut-cmdline[308]: dracut-dracut-053
Aug 13 08:40:44.485757 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 08:40:44.661577 kernel: SCSI subsystem initialized
Aug 13 08:40:44.685572 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 08:40:44.708546 kernel: iscsi: registered transport (tcp)
Aug 13 08:40:44.742699 kernel: iscsi: registered transport (qla4xxx)
Aug 13 08:40:44.742716 kernel: QLogic iSCSI HBA Driver
Aug 13 08:40:44.775167 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 08:40:44.792816 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 08:40:44.880814 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 08:40:44.880839 kernel: device-mapper: uevent: version 1.0.3
Aug 13 08:40:44.900513 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 08:40:44.960571 kernel: raid6: avx2x4 gen() 53128 MB/s
Aug 13 08:40:44.992570 kernel: raid6: avx2x2 gen() 54055 MB/s
Aug 13 08:40:45.028999 kernel: raid6: avx2x1 gen() 45292 MB/s
Aug 13 08:40:45.029015 kernel: raid6: using algorithm avx2x2 gen() 54055 MB/s
Aug 13 08:40:45.076058 kernel: raid6: .... xor() 31565 MB/s, rmw enabled
Aug 13 08:40:45.076075 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 08:40:45.117581 kernel: xor: automatically using best checksumming function avx
Aug 13 08:40:45.231548 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 08:40:45.237019 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 08:40:45.263882 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 08:40:45.270566 systemd-udevd[494]: Using default interface naming scheme 'v255'.
Aug 13 08:40:45.273095 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 08:40:45.295301 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 08:40:45.353269 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation
Aug 13 08:40:45.371686 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 08:40:45.399924 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 08:40:45.463290 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 08:40:45.494586 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 08:40:45.494604 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 13 08:40:45.505545 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 08:40:45.507630 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 08:40:45.541654 kernel: PTP clock support registered
Aug 13 08:40:45.541674 kernel: libata version 3.00 loaded.
Aug 13 08:40:45.520001 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 08:40:45.620106 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 08:40:45.620122 kernel: ACPI: bus type USB registered
Aug 13 08:40:45.620132 kernel: usbcore: registered new interface driver usbfs
Aug 13 08:40:45.620140 kernel: usbcore: registered new interface driver hub
Aug 13 08:40:45.620148 kernel: usbcore: registered new device driver usb
Aug 13 08:40:45.562112 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 08:40:45.987408 kernel: AES CTR mode by8 optimization enabled
Aug 13 08:40:45.987425 kernel: ahci 0000:00:17.0: version 3.0
Aug 13 08:40:45.987517 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Aug 13 08:40:45.987526 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Aug 13 08:40:45.987597 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Aug 13 08:40:45.987606 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Aug 13 08:40:45.987671 kernel: igb 0000:03:00.0: added PHC on eth0
Aug 13 08:40:45.987741 kernel: scsi host0: ahci
Aug 13 08:40:45.987804 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Aug 13 08:40:45.987867 kernel: scsi host1: ahci
Aug 13 08:40:45.987928 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6d:9a:ae
Aug 13 08:40:45.987991 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
Aug 13 08:40:45.988054 kernel: scsi host2: ahci
Aug 13 08:40:45.988116 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Aug 13 08:40:45.988179 kernel: scsi host3: ahci
Aug 13 08:40:45.988238 kernel: igb 0000:04:00.0: added PHC on eth1
Aug 13 08:40:45.988303 kernel: scsi host4: ahci
Aug 13 08:40:45.988365 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Aug 13 08:40:45.988428 kernel: scsi host5: ahci
Aug 13 08:40:45.988487 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6d:9a:af
Aug 13 08:40:45.988556 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
Aug 13 08:40:45.988620 kernel: scsi host6: ahci
Aug 13 08:40:45.988680 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Aug 13 08:40:45.988743 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127
Aug 13 08:40:45.988751 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127
Aug 13 08:40:45.988759 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127
Aug 13 08:40:45.649739 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 08:40:46.095646 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127
Aug 13 08:40:46.095658 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127
Aug 13 08:40:46.095666 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127
Aug 13 08:40:46.095674 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127
Aug 13 08:40:46.095681 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014
Aug 13 08:40:46.080218 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 08:40:46.136635 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Aug 13 08:40:46.132746 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 08:40:46.148589 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 08:40:46.148617 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 08:40:46.184647 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 08:40:46.195630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 08:40:46.195658 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 08:40:46.216614 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 08:40:46.245684 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 08:40:46.245832 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 08:40:46.291964 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 08:40:46.312707 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 08:40:46.352503 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 08:40:46.389606 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
Aug 13 08:40:46.412700 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 08:40:46.412741 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged
Aug 13 08:40:46.422585 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Aug 13 08:40:46.437551 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 08:40:46.452585 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Aug 13 08:40:46.466597 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 08:40:46.480578 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Aug 13 08:40:46.495585 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Aug 13 08:40:46.509586 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Aug 13 08:40:46.525601 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Aug 13 08:40:46.560608 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Aug 13 08:40:46.560624 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Aug 13 08:40:46.594588 kernel: ata2.00: Features: NCQ-prio
Aug 13 08:40:46.594604 kernel: ata1.00: Features: NCQ-prio
Aug 13 08:40:46.631575 kernel: ata2.00: configured for UDMA/133
Aug 13 08:40:46.631596 kernel: ata1.00: configured for UDMA/133
Aug 13 08:40:46.644598 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Aug 13 08:40:46.644703 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Aug 13 08:40:46.661590 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014
Aug 13 08:40:46.661708 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Aug 13 08:40:46.674981 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Aug 13 08:40:46.709546 kernel: igb 0000:04:00.0 eno2: renamed from eth1
Aug 13 08:40:46.761691 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Aug 13 08:40:46.762690 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Aug 13 08:40:46.789545 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Aug 13 08:40:46.789668 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Aug 13 08:40:46.799851 kernel: igb 0000:03:00.0 eno1: renamed from eth0
Aug 13 08:40:46.799998 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Aug 13 08:40:46.834640 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Aug 13 08:40:46.865954 kernel: hub 1-0:1.0: USB hub found
Aug 13 08:40:46.866046 kernel: hub 1-0:1.0: 16 ports detected
Aug 13 08:40:46.891737 kernel: hub 2-0:1.0: USB hub found
Aug 13 08:40:46.891830 kernel: hub 2-0:1.0: 10 ports detected
Aug 13 08:40:46.900545 kernel: ata2.00: Enabling discard_zeroes_data
Aug 13 08:40:46.913013 kernel: ata1.00: Enabling discard_zeroes_data
Aug 13 08:40:46.913029 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Aug 13 08:40:46.917727 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Aug 13 08:40:46.932688 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks
Aug 13 08:40:46.932770 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Aug 13 08:40:46.937909 kernel: sd 1:0:0:0: [sdb] Write Protect is off
Aug 13 08:40:46.943607 kernel: sd 0:0:0:0: [sda] Write Protect is off
Aug 13 08:40:46.952774 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Aug 13 08:40:46.952856 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 13 08:40:46.952922 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Aug 13 08:40:46.960671 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes
Aug 13 08:40:46.977859 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 13 08:40:47.001080 kernel: ata2.00: Enabling discard_zeroes_data
Aug 13 08:40:47.001099 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
Aug 13 08:40:47.001184 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
Aug 13 08:40:47.001590 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Aug 13 08:40:47.015595 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
Aug 13 08:40:47.015684 kernel: ata1.00: Enabling discard_zeroes_data
Aug 13 08:40:47.119983 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Aug 13 08:40:47.127624 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 08:40:47.181007 kernel: GPT:9289727 != 937703087
Aug 13 08:40:47.196007 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 08:40:47.208721 kernel: GPT:9289727 != 937703087
Aug 13 08:40:47.222876 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 08:40:47.236854 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 08:40:47.250904 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Aug 13 08:40:47.251043 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Aug 13 08:40:47.269628 kernel: hub 1-14:1.0: USB hub found
Aug 13 08:40:47.292570 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (551)
Aug 13 08:40:47.292623 kernel: hub 1-14:1.0: 4 ports detected
Aug 13 08:40:47.295087 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT.
Aug 13 08:40:47.403654 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (561)
Aug 13 08:40:47.403671 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0
Aug 13 08:40:47.403763 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2
Aug 13 08:40:47.392431 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM.
Aug 13 08:40:47.420682 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Aug 13 08:40:47.455981 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A.
Aug 13 08:40:47.466706 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A.
Aug 13 08:40:47.506943 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 08:40:47.548672 kernel: ata1.00: Enabling discard_zeroes_data
Aug 13 08:40:47.548686 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 08:40:47.548739 disk-uuid[731]: Primary Header is updated.
Aug 13 08:40:47.548739 disk-uuid[731]: Secondary Entries is updated.
Aug 13 08:40:47.548739 disk-uuid[731]: Secondary Header is updated.
Aug 13 08:40:47.609660 kernel: ata1.00: Enabling discard_zeroes_data
Aug 13 08:40:47.609671 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 08:40:47.609678 kernel: ata1.00: Enabling discard_zeroes_data
Aug 13 08:40:47.609684 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Aug 13 08:40:47.609702 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 08:40:47.714552 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 13 08:40:47.736355 kernel: usbcore: registered new interface driver usbhid
Aug 13 08:40:47.736428 kernel: usbhid: USB HID core driver
Aug 13 08:40:47.779550 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Aug 13 08:40:47.875905 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Aug 13 08:40:47.876043 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Aug 13 08:40:47.910426 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Aug 13 08:40:48.604039 kernel: ata1.00: Enabling discard_zeroes_data
Aug 13 08:40:48.624454 disk-uuid[732]: The operation has completed successfully.
Aug 13 08:40:48.632741 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 08:40:48.657526 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 08:40:48.657641 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 08:40:48.695798 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 08:40:48.720744 sh[750]: Success
Aug 13 08:40:48.731596 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Aug 13 08:40:48.766960 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 08:40:48.779401 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 08:40:48.787165 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 08:40:48.842705 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 08:40:48.842724 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 08:40:48.865385 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 08:40:48.885612 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 08:40:48.904700 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 08:40:48.942544 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Aug 13 08:40:48.943665 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 08:40:48.943915 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 08:40:48.965823 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 08:40:48.985108 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 08:40:49.085863 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 08:40:49.085875 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 08:40:49.085882 kernel: BTRFS info (device sda6): using free space tree
Aug 13 08:40:49.085889 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 08:40:49.085898 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 08:40:49.090863 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 08:40:49.122749 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 08:40:49.128827 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 08:40:49.147731 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 08:40:49.180643 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 08:40:49.205744 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 08:40:49.215101 unknown[834]: fetched base config from "system"
Aug 13 08:40:49.212793 ignition[834]: Ignition 2.19.0
Aug 13 08:40:49.215105 unknown[834]: fetched user config from "system"
Aug 13 08:40:49.212797 ignition[834]: Stage: fetch-offline
Aug 13 08:40:49.216039 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 08:40:49.212819 ignition[834]: no configs at "/usr/lib/ignition/base.d"
Aug 13 08:40:49.217752 systemd-networkd[933]: lo: Link UP
Aug 13 08:40:49.212825 ignition[834]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Aug 13 08:40:49.217755 systemd-networkd[933]: lo: Gained carrier
Aug 13 08:40:49.212899 ignition[834]: parsed url from cmdline: ""
Aug 13 08:40:49.220279 systemd-networkd[933]: Enumeration completed
Aug 13 08:40:49.212901 ignition[834]: no config URL provided
Aug 13 08:40:49.221018 systemd-networkd[933]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 08:40:49.212903 ignition[834]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 08:40:49.236938 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 08:40:49.212940 ignition[834]: parsing config with SHA512: ee95c5714ba2ddc2801d6533abf8ae97a0962cd259d96aadfe37b3ca449ff4283bdffdbcad1bfaec7ca1e48a1b93a6e44c364f8ed6b0ce8bbdee2f39b110aee0
Aug 13 08:40:49.251331 systemd-networkd[933]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 08:40:49.215317 ignition[834]: fetch-offline: fetch-offline passed
Aug 13 08:40:49.255979 systemd[1]: Reached target network.target - Network.
Aug 13 08:40:49.215319 ignition[834]: POST message to Packet Timeline
Aug 13 08:40:49.275774 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 13 08:40:49.215322 ignition[834]: POST Status error: resource requires networking
Aug 13 08:40:49.279907 systemd-networkd[933]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 08:40:49.215358 ignition[834]: Ignition finished successfully
Aug 13 08:40:49.285689 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 08:40:49.300756 ignition[946]: Ignition 2.19.0
Aug 13 08:40:49.486709 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Aug 13 08:40:49.483076 systemd-networkd[933]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 08:40:49.300760 ignition[946]: Stage: kargs
Aug 13 08:40:49.300867 ignition[946]: no configs at "/usr/lib/ignition/base.d"
Aug 13 08:40:49.300873 ignition[946]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Aug 13 08:40:49.301388 ignition[946]: kargs: kargs passed
Aug 13 08:40:49.301391 ignition[946]: POST message to Packet Timeline
Aug 13 08:40:49.301400 ignition[946]: GET https://metadata.packet.net/metadata: attempt #1
Aug 13 08:40:49.301877 ignition[946]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56760->[::1]:53: read: connection refused
Aug 13 08:40:49.502027 ignition[946]: GET https://metadata.packet.net/metadata: attempt #2
Aug 13 08:40:49.502601 ignition[946]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44524->[::1]:53: read: connection refused
Aug 13 08:40:49.763579 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Aug 13 08:40:49.765093 systemd-networkd[933]: eno1: Link UP
Aug 13 08:40:49.765235 systemd-networkd[933]: eno2: Link UP
Aug 13 08:40:49.765365 systemd-networkd[933]: enp1s0f0np0: Link UP
Aug 13 08:40:49.765520 systemd-networkd[933]: enp1s0f0np0: Gained carrier
Aug 13 08:40:49.775735 systemd-networkd[933]: enp1s0f1np1: Link UP
Aug 13 08:40:49.809722 systemd-networkd[933]: enp1s0f0np0: DHCPv4 address 147.75.71.95/31, gateway 147.75.71.94 acquired from 145.40.83.140
Aug 13 08:40:49.903536 ignition[946]: GET https://metadata.packet.net/metadata: attempt #3
Aug 13 08:40:49.904735 ignition[946]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46302->[::1]:53: read: connection refused
Aug 13 08:40:50.522402 systemd-networkd[933]: enp1s0f1np1: Gained carrier
Aug 13 08:40:50.705260 ignition[946]: GET https://metadata.packet.net/metadata: attempt #4
Aug 13 08:40:50.706409 ignition[946]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41540->[::1]:53: read: connection refused
Aug 13 08:40:51.354183 systemd-networkd[933]: enp1s0f0np0: Gained IPv6LL
Aug 13 08:40:51.802187 systemd-networkd[933]: enp1s0f1np1: Gained IPv6LL
Aug 13 08:40:52.307658 ignition[946]: GET https://metadata.packet.net/metadata: attempt #5
Aug 13 08:40:52.308817 ignition[946]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46894->[::1]:53: read: connection refused
Aug 13 08:40:55.511255 ignition[946]: GET https://metadata.packet.net/metadata: attempt #6
Aug 13 08:40:56.543582 ignition[946]: GET result: OK
Aug 13 08:40:57.116395 ignition[946]: Ignition finished successfully
Aug 13 08:40:57.121536 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 08:40:57.151856 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 08:40:57.158089 ignition[969]: Ignition 2.19.0
Aug 13 08:40:57.158094 ignition[969]: Stage: disks
Aug 13 08:40:57.158206 ignition[969]: no configs at "/usr/lib/ignition/base.d"
Aug 13 08:40:57.158213 ignition[969]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Aug 13 08:40:57.158759 ignition[969]: disks: disks passed
Aug 13 08:40:57.158761 ignition[969]: POST message to Packet Timeline
Aug 13 08:40:57.158770 ignition[969]: GET https://metadata.packet.net/metadata: attempt #1
Aug 13 08:40:58.189622 ignition[969]: GET result: OK
Aug 13 08:40:58.675494 ignition[969]: Ignition finished successfully
Aug 13 08:40:58.676798 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 08:40:58.692973 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 08:40:58.711771 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 08:40:58.732832 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 08:40:58.753961 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 08:40:58.773845 systemd[1]: Reached target basic.target - Basic System.
Aug 13 08:40:58.801816 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 08:40:58.834404 systemd-fsck[988]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 08:40:58.844132 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 08:40:58.870818 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 08:40:58.969589 kernel: EXT4-fs (sda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 08:40:58.970056 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 08:40:58.980035 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 08:40:59.000681 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 08:40:59.035550 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (998)
Aug 13 08:40:59.065857 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 08:40:59.065874 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 08:40:59.084591 kernel: BTRFS info (device sda6): using free space tree
Aug 13 08:40:59.098626 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 08:40:59.145781 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 08:40:59.145793 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 08:40:59.113157 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Aug 13 08:40:59.166826 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Aug 13 08:40:59.186630 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 08:40:59.186649 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 08:40:59.234772 coreos-metadata[1015]: Aug 13 08:40:59.214 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Aug 13 08:40:59.255740 coreos-metadata[1016]: Aug 13 08:40:59.214 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Aug 13 08:40:59.198588 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 08:40:59.224834 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 08:40:59.251756 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 08:40:59.307658 initrd-setup-root[1030]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 08:40:59.318669 initrd-setup-root[1037]: cut: /sysroot/etc/group: No such file or directory
Aug 13 08:40:59.328655 initrd-setup-root[1044]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 08:40:59.338812 initrd-setup-root[1051]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 08:40:59.337437 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 08:40:59.350857 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 08:40:59.358348 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 08:40:59.391870 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 08:40:59.422813 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 08:40:59.433750 ignition[1118]: INFO : Ignition 2.19.0
Aug 13 08:40:59.433750 ignition[1118]: INFO : Stage: mount
Aug 13 08:40:59.448780 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 08:40:59.448780 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Aug 13 08:40:59.448780 ignition[1118]: INFO : mount: mount passed
Aug 13 08:40:59.448780 ignition[1118]: INFO : POST message to Packet Timeline
Aug 13 08:40:59.448780 ignition[1118]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Aug 13 08:40:59.442841 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 08:41:00.282407 coreos-metadata[1016]: Aug 13 08:41:00.282 INFO Fetch successful
Aug 13 08:41:00.362800 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Aug 13 08:41:00.362864 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Aug 13 08:41:00.509317 ignition[1118]: INFO : GET result: OK
Aug 13 08:41:00.585437 coreos-metadata[1015]: Aug 13 08:41:00.585 INFO Fetch successful
Aug 13 08:41:00.651223 coreos-metadata[1015]: Aug 13 08:41:00.651 INFO wrote hostname ci-4081.3.5-a-711ae8cc9f to /sysroot/etc/hostname
Aug 13 08:41:00.652866 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 08:41:01.704703 ignition[1118]: INFO : Ignition finished successfully
Aug 13 08:41:01.708144 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 08:41:01.743744 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 08:41:01.753835 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 08:41:01.826448 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1144) Aug 13 08:41:01.826475 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 08:41:01.846777 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 08:41:01.865030 kernel: BTRFS info (device sda6): using free space tree Aug 13 08:41:01.904516 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 08:41:01.904535 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 08:41:01.918808 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 08:41:01.943375 ignition[1161]: INFO : Ignition 2.19.0 Aug 13 08:41:01.943375 ignition[1161]: INFO : Stage: files Aug 13 08:41:01.957852 ignition[1161]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 08:41:01.957852 ignition[1161]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Aug 13 08:41:01.957852 ignition[1161]: DEBUG : files: compiled without relabeling support, skipping Aug 13 08:41:01.957852 ignition[1161]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 08:41:01.957852 ignition[1161]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 08:41:01.957852 ignition[1161]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 08:41:01.957852 ignition[1161]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 08:41:01.957852 ignition[1161]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 08:41:01.957852 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 08:41:01.957852 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Aug 13 
08:41:01.947501 unknown[1161]: wrote ssh authorized keys file for user: core
Aug 13 08:41:02.092780 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 08:41:02.216293 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Aug 13 08:41:02.821035 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 08:41:03.171388 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 08:41:03.171388 ignition[1161]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 08:41:03.201776 ignition[1161]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 08:41:03.201776 ignition[1161]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 08:41:03.201776 ignition[1161]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 08:41:03.201776 ignition[1161]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 08:41:03.201776 ignition[1161]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 08:41:03.201776 ignition[1161]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 08:41:03.201776 ignition[1161]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 08:41:03.201776 ignition[1161]: INFO : files: files passed
Aug 13 08:41:03.201776 ignition[1161]: INFO : POST message to Packet Timeline
Aug 13 08:41:03.201776 ignition[1161]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Aug 13 08:41:04.508280 ignition[1161]: INFO : GET result: OK
Aug 13 08:41:05.194396 ignition[1161]: INFO : Ignition finished successfully
Aug 13 08:41:05.198104 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 08:41:05.227831 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 08:41:05.228235 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 08:41:05.247052 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 08:41:05.247122 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 08:41:05.286361 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 08:41:05.300048 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 08:41:05.342730 initrd-setup-root-after-ignition[1201]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 08:41:05.342730 initrd-setup-root-after-ignition[1201]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 08:41:05.337656 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 08:41:05.394856 initrd-setup-root-after-ignition[1205]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 08:41:05.415755 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 08:41:05.415815 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 08:41:05.434995 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 08:41:05.455717 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 08:41:05.472985 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 08:41:05.482928 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 08:41:05.565232 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 08:41:05.593079 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 08:41:05.621776 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 08:41:05.633059 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 08:41:05.654240 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 08:41:05.673102 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 08:41:05.673505 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 08:41:05.701382 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 08:41:05.723176 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 08:41:05.741177 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 08:41:05.759177 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 08:41:05.780256 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 08:41:05.801158 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 08:41:05.821153 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 08:41:05.842201 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 08:41:05.863297 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 08:41:05.884172 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 08:41:05.902047 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 08:41:05.902460 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 08:41:05.928253 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 08:41:05.948284 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 08:41:05.969020 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 08:41:05.969461 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 08:41:05.991449 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 08:41:05.991884 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 08:41:06.024246 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 08:41:06.024733 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 08:41:06.045378 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 08:41:06.064139 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 08:41:06.064589 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 08:41:06.085174 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 08:41:06.105163 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 08:41:06.123040 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 08:41:06.123355 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 08:41:06.143201 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 08:41:06.143511 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 08:41:06.166229 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 08:41:06.282756 ignition[1225]: INFO : Ignition 2.19.0
Aug 13 08:41:06.282756 ignition[1225]: INFO : Stage: umount
Aug 13 08:41:06.282756 ignition[1225]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 08:41:06.282756 ignition[1225]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Aug 13 08:41:06.282756 ignition[1225]: INFO : umount: umount passed
Aug 13 08:41:06.282756 ignition[1225]: INFO : POST message to Packet Timeline
Aug 13 08:41:06.282756 ignition[1225]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Aug 13 08:41:06.166665 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 08:41:06.186254 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 08:41:06.186663 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 08:41:06.204193 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Aug 13 08:41:06.204601 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 08:41:06.234684 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 08:41:06.249656 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 08:41:06.249778 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 08:41:06.277758 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 08:41:06.282801 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 08:41:06.282960 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 08:41:06.292049 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 08:41:06.292227 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 08:41:06.342482 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 08:41:06.342922 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 08:41:06.342982 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 08:41:06.365228 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 08:41:06.365330 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 08:41:07.324461 ignition[1225]: INFO : GET result: OK
Aug 13 08:41:07.741035 ignition[1225]: INFO : Ignition finished successfully
Aug 13 08:41:07.742568 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 08:41:07.742732 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 08:41:07.762868 systemd[1]: Stopped target network.target - Network.
Aug 13 08:41:07.778822 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 08:41:07.779036 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 08:41:07.796933 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 08:41:07.797094 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 08:41:07.814919 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 08:41:07.815075 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 08:41:07.832951 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 08:41:07.833117 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 08:41:07.850958 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 08:41:07.851130 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 08:41:07.870279 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 08:41:07.885702 systemd-networkd[933]: enp1s0f0np0: DHCPv6 lease lost
Aug 13 08:41:07.889075 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 08:41:07.893693 systemd-networkd[933]: enp1s0f1np1: DHCPv6 lease lost
Aug 13 08:41:07.907731 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 08:41:07.908021 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 08:41:07.927838 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 08:41:07.928194 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 08:41:07.948703 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 08:41:07.948838 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 08:41:07.985750 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 08:41:08.007708 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 08:41:08.007750 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 08:41:08.026839 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 08:41:08.026933 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 08:41:08.046946 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 08:41:08.047116 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 08:41:08.065940 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 08:41:08.066110 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 08:41:08.086159 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 08:41:08.107800 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 08:41:08.108206 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 08:41:08.143644 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 08:41:08.143800 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 08:41:08.148079 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 08:41:08.148189 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 08:41:08.167172 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 08:41:08.167335 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 08:41:08.205122 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 08:41:08.205411 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 08:41:08.244741 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 08:41:08.245010 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 08:41:08.300857 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 08:41:08.334618 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 08:41:08.334663 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 08:41:08.356747 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 08:41:08.356910 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 08:41:08.379039 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 08:41:08.579764 systemd-journald[267]: Received SIGTERM from PID 1 (systemd).
Aug 13 08:41:08.379371 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 08:41:08.411198 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 08:41:08.411480 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 08:41:08.430835 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 08:41:08.468065 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 08:41:08.509870 systemd[1]: Switching root.
Aug 13 08:41:08.632771 systemd-journald[267]: Journal stopped
Aug 13 08:40:44.014012 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 08:40:44.014026 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 08:40:44.014033 kernel: BIOS-provided physical RAM map:
Aug 13 08:40:44.014037 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Aug 13 08:40:44.014041 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Aug 13 08:40:44.014045 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Aug 13 08:40:44.014050 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Aug 13 08:40:44.014054 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Aug 13 08:40:44.014058 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081a73fff] usable
Aug 13 08:40:44.014062 kernel: BIOS-e820: [mem 0x0000000081a74000-0x0000000081a74fff] ACPI NVS
Aug 13 08:40:44.014066 kernel: BIOS-e820: [mem 0x0000000081a75000-0x0000000081a75fff] reserved
Aug 13 08:40:44.014071 kernel: BIOS-e820: [mem 0x0000000081a76000-0x000000008afcdfff] usable
Aug 13 08:40:44.014076 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved
Aug 13 08:40:44.014080 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable
Aug 13 08:40:44.014085 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS
Aug 13 08:40:44.014090 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved
Aug 13 08:40:44.014095 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Aug 13 08:40:44.014100 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Aug 13 08:40:44.014105 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Aug 13 08:40:44.014109 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Aug 13 08:40:44.014114 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Aug 13 08:40:44.014119 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Aug 13 08:40:44.014123 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Aug 13 08:40:44.014128 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Aug 13 08:40:44.014133 kernel: NX (Execute Disable) protection: active
Aug 13 08:40:44.014137 kernel: APIC: Static calls initialized
Aug 13 08:40:44.014142 kernel: SMBIOS 3.2.1 present.
Aug 13 08:40:44.014147 kernel: DMI: Supermicro X11SCM-F/X11SCM-F, BIOS 2.6 12/03/2024
Aug 13 08:40:44.014152 kernel: tsc: Detected 3400.000 MHz processor
Aug 13 08:40:44.014157 kernel: tsc: Detected 3399.906 MHz TSC
Aug 13 08:40:44.014162 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 08:40:44.014167 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 08:40:44.014172 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Aug 13 08:40:44.014177 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Aug 13 08:40:44.014182 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 08:40:44.014186 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Aug 13 08:40:44.014191 kernel: Using GB pages for direct mapping
Aug 13 08:40:44.014197 kernel: ACPI: Early table checksum verification disabled
Aug 13 08:40:44.014202 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Aug 13 08:40:44.014207 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Aug 13 08:40:44.014213 kernel: ACPI: FACP 0x000000008C58B670 000114 (v06 01072009 AMI 00010013)
Aug 13 08:40:44.014218 kernel: ACPI: DSDT 0x000000008C54F268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Aug 13 08:40:44.014223 kernel: ACPI: FACS 0x000000008C66DF80 000040
Aug 13 08:40:44.014229 kernel: ACPI: APIC 0x000000008C58B788 00012C (v04 01072009 AMI 00010013)
Aug 13 08:40:44.014234 kernel: ACPI: FPDT 0x000000008C58B8B8 000044 (v01 01072009 AMI 00010013)
Aug 13 08:40:44.014240 kernel: ACPI: FIDT 0x000000008C58B900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Aug 13 08:40:44.014245 kernel: ACPI: MCFG 0x000000008C58B9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Aug 13 08:40:44.014250 kernel: ACPI: SPMI 0x000000008C58B9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Aug 13 08:40:44.014255 kernel: ACPI: SSDT 0x000000008C58BA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Aug 13 08:40:44.014260 kernel: ACPI: SSDT 0x000000008C58D548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Aug 13 08:40:44.014265 kernel: ACPI: SSDT 0x000000008C590710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Aug 13 08:40:44.014271 kernel: ACPI: HPET 0x000000008C592A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Aug 13 08:40:44.014276 kernel: ACPI: SSDT 0x000000008C592A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Aug 13 08:40:44.014281 kernel: ACPI: SSDT 0x000000008C593A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Aug 13 08:40:44.014286 kernel: ACPI: UEFI 0x000000008C594320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Aug 13 08:40:44.014291 kernel: ACPI: LPIT 0x000000008C594368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Aug 13 08:40:44.014296 kernel: ACPI: SSDT 0x000000008C594400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Aug 13 08:40:44.014301 kernel: ACPI: SSDT 0x000000008C596BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Aug 13 08:40:44.014306 kernel: ACPI: DBGP 0x000000008C5980C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Aug 13 08:40:44.014311 kernel: ACPI: DBG2 0x000000008C598100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Aug 13 08:40:44.014317 kernel: ACPI: SSDT 0x000000008C598158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Aug 13 08:40:44.014322 kernel: ACPI: DMAR 0x000000008C599CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Aug 13 08:40:44.014327 kernel: ACPI: SSDT 0x000000008C599D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Aug 13 08:40:44.014332 kernel: ACPI: TPM2 0x000000008C599E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Aug 13 08:40:44.014337 kernel: ACPI: SSDT 0x000000008C599EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Aug 13 08:40:44.014342 kernel: ACPI: WSMT 0x000000008C59AC40 000028 (v01 SUPERM 01072009 AMI 00010013)
Aug 13 08:40:44.014347 kernel: ACPI: EINJ 0x000000008C59AC68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Aug 13 08:40:44.014352 kernel: ACPI: ERST 0x000000008C59AD98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Aug 13 08:40:44.014358 kernel: ACPI: BERT 0x000000008C59AFC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Aug 13 08:40:44.014363 kernel: ACPI: HEST 0x000000008C59AFF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Aug 13 08:40:44.014368 kernel: ACPI: SSDT 0x000000008C59B278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Aug 13 08:40:44.014373 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b670-0x8c58b783]
Aug 13 08:40:44.014378 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b66b]
Aug 13 08:40:44.014383 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf]
Aug 13 08:40:44.014388 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b788-0x8c58b8b3]
Aug 13 08:40:44.014393 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b8b8-0x8c58b8fb]
Aug 13 08:40:44.014398 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b900-0x8c58b99b]
Aug 13 08:40:44.014404 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b9a0-0x8c58b9db]
Aug 13 08:40:44.014409 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b9e0-0x8c58ba20]
Aug 13 08:40:44.014414 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58ba28-0x8c58d543]
Aug 13 08:40:44.014419 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d548-0x8c59070d]
Aug 13 08:40:44.014424 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590710-0x8c592a3a]
Aug 13 08:40:44.014429 kernel: ACPI: Reserving HPET table memory at [mem 0x8c592a40-0x8c592a77]
Aug 13 08:40:44.014434 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a78-0x8c593a25]
Aug 13 08:40:44.014439 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593a28-0x8c59431b]
Aug 13 08:40:44.014444 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c594320-0x8c594361]
Aug 13 08:40:44.014450 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c594368-0x8c5943fb]
Aug 13 08:40:44.014455 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594400-0x8c596bdd]
Aug 13 08:40:44.014460 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596be0-0x8c5980c1]
Aug 13 08:40:44.014465 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5980c8-0x8c5980fb]
Aug 13 08:40:44.014470 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598100-0x8c598153]
Aug 13 08:40:44.014475 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598158-0x8c599cbe]
Aug 13 08:40:44.014480 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599cc0-0x8c599d2f]
Aug 13 08:40:44.014485 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599d30-0x8c599e73]
Aug 13 08:40:44.014490 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599e78-0x8c599eab]
Aug 13 08:40:44.014496 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599eb0-0x8c59ac3e]
Aug 13 08:40:44.014501 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59ac40-0x8c59ac67]
Aug 13 08:40:44.014506 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59ac68-0x8c59ad97]
Aug 13 08:40:44.014511 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad98-0x8c59afc7]
Aug 13 08:40:44.014516 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59afc8-0x8c59aff7]
Aug 13 08:40:44.014521 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59aff8-0x8c59b273]
Aug 13 08:40:44.014526 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b278-0x8c59b3d9]
Aug 13 08:40:44.014531 kernel: No NUMA configuration found
Aug 13 08:40:44.014536 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Aug 13 08:40:44.014547 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Aug 13 08:40:44.014553 kernel: Zone ranges:
Aug 13 08:40:44.014558 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 08:40:44.014563 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 08:40:44.014569 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Aug 13 08:40:44.014574 kernel: Movable zone start for each node
Aug 13 08:40:44.014579 kernel: Early memory node ranges
Aug 13 08:40:44.014584 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Aug 13 08:40:44.014589 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Aug 13 08:40:44.014594 kernel: node 0: [mem 0x0000000040400000-0x0000000081a73fff]
Aug 13 08:40:44.014600 kernel: node 0: [mem 0x0000000081a76000-0x000000008afcdfff]
Aug 13 08:40:44.014605 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff]
Aug 13 08:40:44.014610 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Aug 13 08:40:44.014615 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Aug 13 08:40:44.014624 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Aug 13 08:40:44.014630 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 08:40:44.014635 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Aug 13 08:40:44.014641 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Aug 13 08:40:44.014647 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Aug 13 08:40:44.014652 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Aug 13 08:40:44.014658 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges
Aug 13 08:40:44.014663 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Aug 13 08:40:44.014669 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Aug 13 08:40:44.014674 kernel: ACPI: PM-Timer IO Port: 0x1808
Aug 13 08:40:44.014680 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Aug 13 08:40:44.014685 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Aug 13 08:40:44.014690 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Aug 13 08:40:44.014696 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Aug 13 08:40:44.014702 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Aug 13 08:40:44.014707 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Aug 13 08:40:44.014713 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Aug 13 08:40:44.014718 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Aug 13 08:40:44.014723 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Aug 13 08:40:44.014729 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Aug 13 08:40:44.014734 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Aug 13 08:40:44.014739 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Aug 13 08:40:44.014745 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Aug 13 08:40:44.014751 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Aug 13 08:40:44.014756 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Aug 13 08:40:44.014762 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Aug 13 08:40:44.014767 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Aug 13 08:40:44.014772 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 08:40:44.014778 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 08:40:44.014783 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 08:40:44.014789 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 08:40:44.014794 kernel: TSC deadline timer available
Aug 13 08:40:44.014801 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Aug 13 08:40:44.014806 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Aug 13 08:40:44.014812 kernel: Booting paravirtualized kernel on bare hardware
Aug 13 08:40:44.014817 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 08:40:44.014823 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Aug 13 08:40:44.014828 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144
Aug 13 08:40:44.014834 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152
Aug 13 08:40:44.014839 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Aug 13 08:40:44.014846 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 08:40:44.014851 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 08:40:44.014857 kernel: random: crng init done
Aug 13 08:40:44.014862 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Aug 13 08:40:44.014867 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Aug 13 08:40:44.014873 kernel: Fallback order for Node 0: 0
Aug 13 08:40:44.014878 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416
Aug 13 08:40:44.014884 kernel: Policy zone: Normal
Aug 13 08:40:44.014890 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 08:40:44.014895 kernel: software IO TLB: area num 16.
Aug 13 08:40:44.014901 kernel: Memory: 32720308K/33452984K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 732416K reserved, 0K cma-reserved)
Aug 13 08:40:44.014906 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Aug 13 08:40:44.014912 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 08:40:44.014917 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 08:40:44.014923 kernel: Dynamic Preempt: voluntary
Aug 13 08:40:44.014928 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 08:40:44.014934 kernel: rcu: RCU event tracing is enabled.
Aug 13 08:40:44.014941 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Aug 13 08:40:44.014946 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 08:40:44.014952 kernel: Rude variant of Tasks RCU enabled.
Aug 13 08:40:44.014957 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 08:40:44.014963 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 08:40:44.014968 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Aug 13 08:40:44.014973 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Aug 13 08:40:44.014979 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 08:40:44.014984 kernel: Console: colour dummy device 80x25
Aug 13 08:40:44.014990 kernel: printk: console [tty0] enabled
Aug 13 08:40:44.014996 kernel: printk: console [ttyS1] enabled
Aug 13 08:40:44.015001 kernel: ACPI: Core revision 20230628
Aug 13 08:40:44.015007 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Aug 13 08:40:44.015013 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 08:40:44.015018 kernel: DMAR: Host address width 39
Aug 13 08:40:44.015023 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Aug 13 08:40:44.015029 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Aug 13 08:40:44.015034 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff
Aug 13 08:40:44.015040 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Aug 13 08:40:44.015046 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Aug 13 08:40:44.015051 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Aug 13 08:40:44.015057 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Aug 13 08:40:44.015062 kernel: x2apic enabled
Aug 13 08:40:44.015068 kernel: APIC: Switched APIC routing to: cluster x2apic
Aug 13 08:40:44.015073 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Aug 13 08:40:44.015079 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Aug 13 08:40:44.015084 kernel: CPU0: Thermal monitoring enabled (TM1)
Aug 13 08:40:44.015089 kernel: process: using mwait in idle threads
Aug 13 08:40:44.015096 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Aug 13 08:40:44.015101 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Aug 13 08:40:44.015106 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 08:40:44.015112 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Aug 13 08:40:44.015117 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Aug 13 08:40:44.015122 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Aug 13 08:40:44.015128 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Aug 13 08:40:44.015133 kernel: RETBleed: Mitigation: Enhanced IBRS
Aug 13 08:40:44.015138 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 08:40:44.015143 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 08:40:44.015149 kernel: TAA: Mitigation: TSX disabled
Aug 13 08:40:44.015155 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Aug 13 08:40:44.015160 kernel: SRBDS: Mitigation: Microcode
Aug 13 08:40:44.015166 kernel: GDS: Mitigation: Microcode
Aug 13 08:40:44.015171 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 08:40:44.015177 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 08:40:44.015182 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 08:40:44.015187 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 08:40:44.015193 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Aug 13 08:40:44.015198 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Aug 13 08:40:44.015203 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 08:40:44.015209 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Aug 13 08:40:44.015215 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Aug 13 08:40:44.015220 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Aug 13 08:40:44.015226 kernel: Freeing SMP alternatives memory: 32K
Aug 13 08:40:44.015231 kernel: pid_max: default: 32768 minimum: 301
Aug 13 08:40:44.015236 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 08:40:44.015242 kernel: landlock: Up and running.
Aug 13 08:40:44.015247 kernel: SELinux: Initializing.
Aug 13 08:40:44.015252 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 08:40:44.015257 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 08:40:44.015263 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Aug 13 08:40:44.015268 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Aug 13 08:40:44.015275 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Aug 13 08:40:44.015280 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Aug 13 08:40:44.015286 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Aug 13 08:40:44.015291 kernel: ... version: 4
Aug 13 08:40:44.015297 kernel: ... bit width: 48
Aug 13 08:40:44.015302 kernel: ... generic registers: 4
Aug 13 08:40:44.015307 kernel: ... value mask: 0000ffffffffffff
Aug 13 08:40:44.015313 kernel: ... max period: 00007fffffffffff
Aug 13 08:40:44.015318 kernel: ... fixed-purpose events: 3
Aug 13 08:40:44.015325 kernel: ...
event mask: 000000070000000f Aug 13 08:40:44.015330 kernel: signal: max sigframe size: 2032 Aug 13 08:40:44.015335 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Aug 13 08:40:44.015341 kernel: rcu: Hierarchical SRCU implementation. Aug 13 08:40:44.015346 kernel: rcu: Max phase no-delay instances is 400. Aug 13 08:40:44.015352 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Aug 13 08:40:44.015357 kernel: smp: Bringing up secondary CPUs ... Aug 13 08:40:44.015363 kernel: smpboot: x86: Booting SMP configuration: Aug 13 08:40:44.015368 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Aug 13 08:40:44.015375 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Aug 13 08:40:44.015380 kernel: smp: Brought up 1 node, 16 CPUs Aug 13 08:40:44.015386 kernel: smpboot: Max logical packages: 1 Aug 13 08:40:44.015391 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Aug 13 08:40:44.015396 kernel: devtmpfs: initialized Aug 13 08:40:44.015402 kernel: x86/mm: Memory block size: 128MB Aug 13 08:40:44.015407 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81a74000-0x81a74fff] (4096 bytes) Aug 13 08:40:44.015413 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes) Aug 13 08:40:44.015418 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 08:40:44.015424 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Aug 13 08:40:44.015430 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 08:40:44.015435 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 08:40:44.015441 kernel: audit: initializing netlink subsys (disabled) Aug 13 08:40:44.015446 kernel: audit: type=2000 
audit(1755074438.039:1): state=initialized audit_enabled=0 res=1 Aug 13 08:40:44.015451 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 08:40:44.015457 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 08:40:44.015462 kernel: cpuidle: using governor menu Aug 13 08:40:44.015467 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 08:40:44.015474 kernel: dca service started, version 1.12.1 Aug 13 08:40:44.015479 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Aug 13 08:40:44.015485 kernel: PCI: Using configuration type 1 for base access Aug 13 08:40:44.015490 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Aug 13 08:40:44.015495 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 13 08:40:44.015501 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 08:40:44.015506 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 08:40:44.015512 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 08:40:44.015518 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 08:40:44.015523 kernel: ACPI: Added _OSI(Module Device) Aug 13 08:40:44.015529 kernel: ACPI: Added _OSI(Processor Device) Aug 13 08:40:44.015534 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 08:40:44.015541 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Aug 13 08:40:44.015547 kernel: ACPI: Dynamic OEM Table Load: Aug 13 08:40:44.015552 kernel: ACPI: SSDT 0xFFFF960B41AF0800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Aug 13 08:40:44.015558 kernel: ACPI: Dynamic OEM Table Load: Aug 13 08:40:44.015563 kernel: ACPI: SSDT 0xFFFF960B41AEE800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Aug 13 08:40:44.015570 kernel: ACPI: Dynamic OEM Table Load: Aug 13 08:40:44.015575 kernel: ACPI: SSDT 0xFFFF960B40246300 0000F4 (v02 PmRef 
Cpu0Psd 00003000 INTL 20160527) Aug 13 08:40:44.015581 kernel: ACPI: Dynamic OEM Table Load: Aug 13 08:40:44.015586 kernel: ACPI: SSDT 0xFFFF960B41AE9000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Aug 13 08:40:44.015591 kernel: ACPI: Dynamic OEM Table Load: Aug 13 08:40:44.015597 kernel: ACPI: SSDT 0xFFFF960B4012B000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Aug 13 08:40:44.015602 kernel: ACPI: Dynamic OEM Table Load: Aug 13 08:40:44.015607 kernel: ACPI: SSDT 0xFFFF960B41AF6C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Aug 13 08:40:44.015613 kernel: ACPI: _OSC evaluated successfully for all CPUs Aug 13 08:40:44.015618 kernel: ACPI: Interpreter enabled Aug 13 08:40:44.015625 kernel: ACPI: PM: (supports S0 S5) Aug 13 08:40:44.015630 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 08:40:44.015636 kernel: HEST: Enabling Firmware First mode for corrected errors. Aug 13 08:40:44.015641 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Aug 13 08:40:44.015646 kernel: HEST: Table parsing has been initialized. Aug 13 08:40:44.015652 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Aug 13 08:40:44.015657 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 08:40:44.015663 kernel: PCI: Ignoring E820 reservations for host bridge windows Aug 13 08:40:44.015668 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Aug 13 08:40:44.015674 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Aug 13 08:40:44.015680 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Aug 13 08:40:44.015686 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Aug 13 08:40:44.015691 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Aug 13 08:40:44.015696 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Aug 13 08:40:44.015702 kernel: ACPI: \_TZ_.FN00: New power resource Aug 13 08:40:44.015707 kernel: ACPI: \_TZ_.FN01: New power resource Aug 13 08:40:44.015713 kernel: ACPI: \_TZ_.FN02: New power resource Aug 13 08:40:44.015718 kernel: ACPI: \_TZ_.FN03: New power resource Aug 13 08:40:44.015725 kernel: ACPI: \_TZ_.FN04: New power resource Aug 13 08:40:44.015730 kernel: ACPI: \PIN_: New power resource Aug 13 08:40:44.015735 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Aug 13 08:40:44.015812 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 08:40:44.015866 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Aug 13 08:40:44.015915 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Aug 13 08:40:44.015923 kernel: PCI host bridge to bus 0000:00 Aug 13 08:40:44.015972 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 08:40:44.016019 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 08:40:44.016063 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 08:40:44.016104 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Aug 13 08:40:44.016147 kernel: pci_bus 0000:00: root bus resource 
[mem 0xfc800000-0xfe7fffff window] Aug 13 08:40:44.016188 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Aug 13 08:40:44.016248 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Aug 13 08:40:44.016307 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Aug 13 08:40:44.016358 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Aug 13 08:40:44.016413 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Aug 13 08:40:44.016462 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Aug 13 08:40:44.016514 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Aug 13 08:40:44.016567 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Aug 13 08:40:44.016624 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Aug 13 08:40:44.016672 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Aug 13 08:40:44.016721 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Aug 13 08:40:44.016773 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Aug 13 08:40:44.016822 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Aug 13 08:40:44.016869 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Aug 13 08:40:44.016923 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Aug 13 08:40:44.016974 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Aug 13 08:40:44.017026 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Aug 13 08:40:44.017074 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Aug 13 08:40:44.017127 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Aug 13 08:40:44.017176 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Aug 13 08:40:44.017227 kernel: pci 0000:00:16.0: PME# supported from D3hot Aug 13 08:40:44.017278 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Aug 13 08:40:44.017326 kernel: pci 0000:00:16.1: reg 0x10: 
[mem 0x95519000-0x95519fff 64bit] Aug 13 08:40:44.017383 kernel: pci 0000:00:16.1: PME# supported from D3hot Aug 13 08:40:44.017434 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Aug 13 08:40:44.017483 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Aug 13 08:40:44.017530 kernel: pci 0000:00:16.4: PME# supported from D3hot Aug 13 08:40:44.017588 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Aug 13 08:40:44.017637 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Aug 13 08:40:44.017685 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Aug 13 08:40:44.017732 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Aug 13 08:40:44.017780 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Aug 13 08:40:44.017827 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Aug 13 08:40:44.017876 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Aug 13 08:40:44.017927 kernel: pci 0000:00:17.0: PME# supported from D3hot Aug 13 08:40:44.017983 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Aug 13 08:40:44.018034 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Aug 13 08:40:44.018087 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Aug 13 08:40:44.018139 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Aug 13 08:40:44.018192 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Aug 13 08:40:44.018242 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Aug 13 08:40:44.018294 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Aug 13 08:40:44.018344 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Aug 13 08:40:44.018396 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Aug 13 08:40:44.018448 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Aug 13 08:40:44.018501 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Aug 13 08:40:44.018583 kernel: pci 0000:00:1e.0: reg 
0x10: [mem 0x00000000-0x00000fff 64bit] Aug 13 08:40:44.018637 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Aug 13 08:40:44.018694 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Aug 13 08:40:44.018744 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Aug 13 08:40:44.018795 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Aug 13 08:40:44.018847 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Aug 13 08:40:44.018896 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Aug 13 08:40:44.018951 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Aug 13 08:40:44.019003 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Aug 13 08:40:44.019052 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Aug 13 08:40:44.019103 kernel: pci 0000:01:00.0: PME# supported from D3cold Aug 13 08:40:44.019155 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Aug 13 08:40:44.019206 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Aug 13 08:40:44.019261 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Aug 13 08:40:44.019311 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Aug 13 08:40:44.019362 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Aug 13 08:40:44.019411 kernel: pci 0000:01:00.1: PME# supported from D3cold Aug 13 08:40:44.019461 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Aug 13 08:40:44.019513 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Aug 13 08:40:44.019567 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Aug 13 08:40:44.019615 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Aug 13 08:40:44.019666 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Aug 13 08:40:44.019716 
kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Aug 13 08:40:44.019770 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Aug 13 08:40:44.019821 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Aug 13 08:40:44.019873 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Aug 13 08:40:44.019923 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Aug 13 08:40:44.019972 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Aug 13 08:40:44.020023 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Aug 13 08:40:44.020072 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Aug 13 08:40:44.020121 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Aug 13 08:40:44.020170 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Aug 13 08:40:44.020225 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Aug 13 08:40:44.020279 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Aug 13 08:40:44.020328 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Aug 13 08:40:44.020379 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Aug 13 08:40:44.020427 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Aug 13 08:40:44.020478 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Aug 13 08:40:44.020527 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Aug 13 08:40:44.020579 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Aug 13 08:40:44.020630 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Aug 13 08:40:44.020681 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Aug 13 08:40:44.020738 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Aug 13 08:40:44.020789 kernel: pci 0000:06:00.0: enabling Extended Tags Aug 13 08:40:44.020840 kernel: pci 0000:06:00.0: supports D1 D2 Aug 13 08:40:44.020891 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 13 08:40:44.020941 kernel: pci 0000:00:1c.3: PCI bridge 
to [bus 06-07] Aug 13 08:40:44.020992 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Aug 13 08:40:44.021042 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Aug 13 08:40:44.021094 kernel: pci_bus 0000:07: extended config space not accessible Aug 13 08:40:44.021151 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Aug 13 08:40:44.021205 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Aug 13 08:40:44.021256 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Aug 13 08:40:44.021308 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Aug 13 08:40:44.021359 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 08:40:44.021414 kernel: pci 0000:07:00.0: supports D1 D2 Aug 13 08:40:44.021466 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 13 08:40:44.021517 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Aug 13 08:40:44.021570 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Aug 13 08:40:44.021620 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Aug 13 08:40:44.021628 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Aug 13 08:40:44.021635 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Aug 13 08:40:44.021642 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Aug 13 08:40:44.021648 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Aug 13 08:40:44.021654 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Aug 13 08:40:44.021660 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Aug 13 08:40:44.021665 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Aug 13 08:40:44.021671 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Aug 13 08:40:44.021677 kernel: iommu: Default domain type: Translated Aug 13 08:40:44.021683 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 08:40:44.021689 kernel: PCI: Using ACPI for IRQ 
routing Aug 13 08:40:44.021696 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 08:40:44.021702 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Aug 13 08:40:44.021707 kernel: e820: reserve RAM buffer [mem 0x81a74000-0x83ffffff] Aug 13 08:40:44.021713 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] Aug 13 08:40:44.021718 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] Aug 13 08:40:44.021724 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Aug 13 08:40:44.021729 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Aug 13 08:40:44.021781 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Aug 13 08:40:44.021832 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Aug 13 08:40:44.021886 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 08:40:44.021895 kernel: vgaarb: loaded Aug 13 08:40:44.021901 kernel: clocksource: Switched to clocksource tsc-early Aug 13 08:40:44.021907 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 08:40:44.021912 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 08:40:44.021918 kernel: pnp: PnP ACPI init Aug 13 08:40:44.021969 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Aug 13 08:40:44.022017 kernel: pnp 00:02: [dma 0 disabled] Aug 13 08:40:44.022070 kernel: pnp 00:03: [dma 0 disabled] Aug 13 08:40:44.022117 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Aug 13 08:40:44.022163 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Aug 13 08:40:44.022211 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Aug 13 08:40:44.022256 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Aug 13 08:40:44.022300 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Aug 13 08:40:44.022347 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Aug 13 08:40:44.022391 kernel: system 00:05: [mem 
0xfed20000-0xfed3ffff] has been reserved Aug 13 08:40:44.022436 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Aug 13 08:40:44.022480 kernel: system 00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Aug 13 08:40:44.022524 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Aug 13 08:40:44.022576 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Aug 13 08:40:44.022625 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Aug 13 08:40:44.022672 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Aug 13 08:40:44.022716 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Aug 13 08:40:44.022760 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Aug 13 08:40:44.022804 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Aug 13 08:40:44.022848 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Aug 13 08:40:44.022895 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Aug 13 08:40:44.022903 kernel: pnp: PnP ACPI: found 9 devices Aug 13 08:40:44.022911 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 08:40:44.022917 kernel: NET: Registered PF_INET protocol family Aug 13 08:40:44.022923 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 08:40:44.022929 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Aug 13 08:40:44.022935 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 08:40:44.022941 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 08:40:44.022946 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Aug 13 08:40:44.022952 kernel: TCP: Hash tables configured (established 262144 bind 65536) Aug 13 08:40:44.022958 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, 
linear) Aug 13 08:40:44.022965 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 13 08:40:44.022970 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 08:40:44.022976 kernel: NET: Registered PF_XDP protocol family Aug 13 08:40:44.023026 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Aug 13 08:40:44.023075 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Aug 13 08:40:44.023124 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Aug 13 08:40:44.023176 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Aug 13 08:40:44.023226 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Aug 13 08:40:44.023278 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Aug 13 08:40:44.023328 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Aug 13 08:40:44.023378 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Aug 13 08:40:44.023428 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Aug 13 08:40:44.023477 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Aug 13 08:40:44.023525 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Aug 13 08:40:44.023580 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Aug 13 08:40:44.023629 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Aug 13 08:40:44.023677 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Aug 13 08:40:44.023725 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Aug 13 08:40:44.023774 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Aug 13 08:40:44.023822 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Aug 13 08:40:44.023871 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Aug 13 08:40:44.023922 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Aug 13 08:40:44.023972 kernel: pci 0000:06:00.0: bridge 
window [io 0x3000-0x3fff] Aug 13 08:40:44.024022 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Aug 13 08:40:44.024070 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Aug 13 08:40:44.024119 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Aug 13 08:40:44.024167 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Aug 13 08:40:44.024213 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Aug 13 08:40:44.024256 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 08:40:44.024300 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 08:40:44.024345 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 08:40:44.024388 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Aug 13 08:40:44.024429 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Aug 13 08:40:44.024478 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Aug 13 08:40:44.024523 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Aug 13 08:40:44.024577 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Aug 13 08:40:44.024624 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Aug 13 08:40:44.024673 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Aug 13 08:40:44.024717 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Aug 13 08:40:44.024765 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Aug 13 08:40:44.024810 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Aug 13 08:40:44.024856 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Aug 13 08:40:44.024903 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Aug 13 08:40:44.024913 kernel: PCI: CLS 64 bytes, default 64 Aug 13 08:40:44.024919 kernel: DMAR: No ATSR found Aug 13 08:40:44.024925 kernel: DMAR: No SATC found Aug 13 08:40:44.024931 kernel: DMAR: dmar0: Using 
Queued invalidation
Aug 13 08:40:44.024980 kernel: pci 0000:00:00.0: Adding to iommu group 0
Aug 13 08:40:44.025029 kernel: pci 0000:00:01.0: Adding to iommu group 1
Aug 13 08:40:44.025078 kernel: pci 0000:00:08.0: Adding to iommu group 2
Aug 13 08:40:44.025127 kernel: pci 0000:00:12.0: Adding to iommu group 3
Aug 13 08:40:44.025177 kernel: pci 0000:00:14.0: Adding to iommu group 4
Aug 13 08:40:44.025227 kernel: pci 0000:00:14.2: Adding to iommu group 4
Aug 13 08:40:44.025275 kernel: pci 0000:00:15.0: Adding to iommu group 5
Aug 13 08:40:44.025323 kernel: pci 0000:00:15.1: Adding to iommu group 5
Aug 13 08:40:44.025372 kernel: pci 0000:00:16.0: Adding to iommu group 6
Aug 13 08:40:44.025421 kernel: pci 0000:00:16.1: Adding to iommu group 6
Aug 13 08:40:44.025470 kernel: pci 0000:00:16.4: Adding to iommu group 6
Aug 13 08:40:44.025518 kernel: pci 0000:00:17.0: Adding to iommu group 7
Aug 13 08:40:44.025570 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Aug 13 08:40:44.025621 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Aug 13 08:40:44.025670 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Aug 13 08:40:44.025718 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Aug 13 08:40:44.025766 kernel: pci 0000:00:1c.3: Adding to iommu group 12
Aug 13 08:40:44.025814 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Aug 13 08:40:44.025863 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Aug 13 08:40:44.025911 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Aug 13 08:40:44.025960 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Aug 13 08:40:44.026012 kernel: pci 0000:01:00.0: Adding to iommu group 1
Aug 13 08:40:44.026062 kernel: pci 0000:01:00.1: Adding to iommu group 1
Aug 13 08:40:44.026112 kernel: pci 0000:03:00.0: Adding to iommu group 15
Aug 13 08:40:44.026163 kernel: pci 0000:04:00.0: Adding to iommu group 16
Aug 13 08:40:44.026212 kernel: pci 0000:06:00.0: Adding to iommu group 17
Aug 13 08:40:44.026263 kernel: pci 0000:07:00.0: Adding to iommu group 17
Aug 13 08:40:44.026272 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Aug 13 08:40:44.026278 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 08:40:44.026286 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB)
Aug 13 08:40:44.026292 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Aug 13 08:40:44.026298 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Aug 13 08:40:44.026304 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Aug 13 08:40:44.026309 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Aug 13 08:40:44.026361 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Aug 13 08:40:44.026370 kernel: Initialise system trusted keyrings
Aug 13 08:40:44.026375 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Aug 13 08:40:44.026383 kernel: Key type asymmetric registered
Aug 13 08:40:44.026388 kernel: Asymmetric key parser 'x509' registered
Aug 13 08:40:44.026394 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 08:40:44.026400 kernel: io scheduler mq-deadline registered
Aug 13 08:40:44.026406 kernel: io scheduler kyber registered
Aug 13 08:40:44.026411 kernel: io scheduler bfq registered
Aug 13 08:40:44.026459 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Aug 13 08:40:44.026509 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
Aug 13 08:40:44.026562 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
Aug 13 08:40:44.026614 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
Aug 13 08:40:44.026662 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
Aug 13 08:40:44.026712 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
Aug 13 08:40:44.026765 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Aug 13 08:40:44.026773 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Aug 13 08:40:44.026779 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Aug 13 08:40:44.026785 kernel: pstore: Using crash dump compression: deflate
Aug 13 08:40:44.026793 kernel: pstore: Registered erst as persistent store backend
Aug 13 08:40:44.026799 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 08:40:44.026805 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 08:40:44.026811 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 08:40:44.026816 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Aug 13 08:40:44.026822 kernel: hpet_acpi_add: no address or irqs in _CRS
Aug 13 08:40:44.026875 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Aug 13 08:40:44.026884 kernel: i8042: PNP: No PS/2 controller found.
Aug 13 08:40:44.026928 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Aug 13 08:40:44.026977 kernel: rtc_cmos rtc_cmos: registered as rtc0
Aug 13 08:40:44.027022 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-08-13T08:40:42 UTC (1755074442)
Aug 13 08:40:44.027066 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Aug 13 08:40:44.027074 kernel: intel_pstate: Intel P-state driver initializing
Aug 13 08:40:44.027080 kernel: intel_pstate: Disabling energy efficiency optimization
Aug 13 08:40:44.027086 kernel: intel_pstate: HWP enabled
Aug 13 08:40:44.027092 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Aug 13 08:40:44.027098 kernel: vesafb: scrolling: redraw
Aug 13 08:40:44.027105 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Aug 13 08:40:44.027111 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000005a2cb4df, using 768k, total 768k
Aug 13 08:40:44.027117 kernel: Console: switching to colour frame buffer device 128x48
Aug 13 08:40:44.027123 kernel: fb0: VESA VGA frame buffer device
Aug 13 08:40:44.027128 kernel: NET: Registered PF_INET6 protocol family
Aug 13 08:40:44.027134 kernel: Segment Routing with IPv6
Aug 13 08:40:44.027140 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 08:40:44.027146 kernel: NET: Registered PF_PACKET protocol family
Aug 13 08:40:44.027151 kernel: Key type dns_resolver registered
Aug 13 08:40:44.027158 kernel: microcode: Current revision: 0x00000102
Aug 13 08:40:44.027164 kernel: microcode: Microcode Update Driver: v2.2.
Aug 13 08:40:44.027169 kernel: IPI shorthand broadcast: enabled
Aug 13 08:40:44.027175 kernel: sched_clock: Marking stable (1561048071, 1378409101)->(4401160932, -1461703760)
Aug 13 08:40:44.027181 kernel: registered taskstats version 1
Aug 13 08:40:44.027187 kernel: Loading compiled-in X.509 certificates
Aug 13 08:40:44.027192 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041'
Aug 13 08:40:44.027198 kernel: Key type .fscrypt registered
Aug 13 08:40:44.027204 kernel: Key type fscrypt-provisioning registered
Aug 13 08:40:44.027210 kernel: ima: Allocated hash algorithm: sha1
Aug 13 08:40:44.027216 kernel: ima: No architecture policies found
Aug 13 08:40:44.027222 kernel: clk: Disabling unused clocks
Aug 13 08:40:44.027228 kernel: Freeing unused kernel image (initmem) memory: 42876K
Aug 13 08:40:44.027233 kernel: Write protecting the kernel read-only data: 36864k
Aug 13 08:40:44.027239 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Aug 13 08:40:44.027245 kernel: Run /init as init process
Aug 13 08:40:44.027251 kernel: with arguments:
Aug 13 08:40:44.027256 kernel: /init
Aug 13 08:40:44.027263 kernel: with environment:
Aug 13 08:40:44.027269 kernel: HOME=/
Aug 13 08:40:44.027274 kernel: TERM=linux
Aug 13 08:40:44.027280 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 08:40:44.027287 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 08:40:44.027294 systemd[1]: Detected architecture x86-64.
Aug 13 08:40:44.027300 systemd[1]: Running in initrd.
Aug 13 08:40:44.027307 systemd[1]: No hostname configured, using default hostname.
Aug 13 08:40:44.027313 systemd[1]: Hostname set to .
Aug 13 08:40:44.027319 systemd[1]: Initializing machine ID from random generator.
Aug 13 08:40:44.027325 systemd[1]: Queued start job for default target initrd.target.
Aug 13 08:40:44.027331 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 08:40:44.027337 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 08:40:44.027344 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 08:40:44.027350 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 08:40:44.027357 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 08:40:44.027363 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 08:40:44.027369 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 08:40:44.027376 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 08:40:44.027382 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz
Aug 13 08:40:44.027387 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns
Aug 13 08:40:44.027393 kernel: clocksource: Switched to clocksource tsc
Aug 13 08:40:44.027400 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 08:40:44.027406 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 08:40:44.027412 systemd[1]: Reached target paths.target - Path Units.
Aug 13 08:40:44.027419 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 08:40:44.027425 systemd[1]: Reached target swap.target - Swaps.
Aug 13 08:40:44.027431 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 08:40:44.027437 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 08:40:44.027443 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 08:40:44.027449 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 08:40:44.027456 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 08:40:44.027462 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 08:40:44.027468 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 08:40:44.027474 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 08:40:44.027480 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 08:40:44.027486 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 08:40:44.027492 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 08:40:44.027498 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 08:40:44.027505 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 08:40:44.027511 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 08:40:44.027527 systemd-journald[267]: Collecting audit messages is disabled.
Aug 13 08:40:44.027543 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 08:40:44.027551 systemd-journald[267]: Journal started
Aug 13 08:40:44.027565 systemd-journald[267]: Runtime Journal (/run/log/journal/5e8ab259764f4c4a890329a61880395b) is 8.0M, max 639.9M, 631.9M free.
Aug 13 08:40:44.050718 systemd-modules-load[268]: Inserted module 'overlay'
Aug 13 08:40:44.072594 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 08:40:44.101510 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 08:40:44.162487 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 08:40:44.162500 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 08:40:44.162509 kernel: Bridge firewalling registered
Aug 13 08:40:44.157726 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 08:40:44.162464 systemd-modules-load[268]: Inserted module 'br_netfilter'
Aug 13 08:40:44.181912 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 08:40:44.202924 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 08:40:44.221853 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 08:40:44.244865 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 08:40:44.247852 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 08:40:44.293911 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 08:40:44.296613 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 08:40:44.301492 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 08:40:44.301674 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 08:40:44.302396 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 08:40:44.304959 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 08:40:44.305805 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 08:40:44.307114 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 08:40:44.310778 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 08:40:44.323202 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 08:40:44.323842 systemd-resolved[301]: Positive Trust Anchors:
Aug 13 08:40:44.323847 systemd-resolved[301]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 08:40:44.323873 systemd-resolved[301]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 08:40:44.325589 systemd-resolved[301]: Defaulting to hostname 'linux'.
Aug 13 08:40:44.350785 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 08:40:44.365763 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 08:40:44.478214 dracut-cmdline[308]: dracut-dracut-053
Aug 13 08:40:44.485757 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 08:40:44.661577 kernel: SCSI subsystem initialized
Aug 13 08:40:44.685572 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 08:40:44.708546 kernel: iscsi: registered transport (tcp)
Aug 13 08:40:44.742699 kernel: iscsi: registered transport (qla4xxx)
Aug 13 08:40:44.742716 kernel: QLogic iSCSI HBA Driver
Aug 13 08:40:44.775167 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 08:40:44.792816 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 08:40:44.880814 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 08:40:44.880839 kernel: device-mapper: uevent: version 1.0.3
Aug 13 08:40:44.900513 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 08:40:44.960571 kernel: raid6: avx2x4 gen() 53128 MB/s
Aug 13 08:40:44.992570 kernel: raid6: avx2x2 gen() 54055 MB/s
Aug 13 08:40:45.028999 kernel: raid6: avx2x1 gen() 45292 MB/s
Aug 13 08:40:45.029015 kernel: raid6: using algorithm avx2x2 gen() 54055 MB/s
Aug 13 08:40:45.076058 kernel: raid6: .... xor() 31565 MB/s, rmw enabled
Aug 13 08:40:45.076075 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 08:40:45.117581 kernel: xor: automatically using best checksumming function avx
Aug 13 08:40:45.231548 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 08:40:45.237019 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 08:40:45.263882 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 08:40:45.270566 systemd-udevd[494]: Using default interface naming scheme 'v255'.
Aug 13 08:40:45.273095 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 08:40:45.295301 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 08:40:45.353269 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation
Aug 13 08:40:45.371686 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 08:40:45.399924 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 08:40:45.463290 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 08:40:45.494586 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 08:40:45.494604 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 13 08:40:45.505545 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 08:40:45.507630 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 08:40:45.541654 kernel: PTP clock support registered
Aug 13 08:40:45.541674 kernel: libata version 3.00 loaded.
Aug 13 08:40:45.520001 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 08:40:45.620106 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 08:40:45.620122 kernel: ACPI: bus type USB registered
Aug 13 08:40:45.620132 kernel: usbcore: registered new interface driver usbfs
Aug 13 08:40:45.620140 kernel: usbcore: registered new interface driver hub
Aug 13 08:40:45.620148 kernel: usbcore: registered new device driver usb
Aug 13 08:40:45.562112 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 08:40:45.987408 kernel: AES CTR mode by8 optimization enabled
Aug 13 08:40:45.987425 kernel: ahci 0000:00:17.0: version 3.0
Aug 13 08:40:45.987517 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Aug 13 08:40:45.987526 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Aug 13 08:40:45.987597 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Aug 13 08:40:45.987606 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Aug 13 08:40:45.987671 kernel: igb 0000:03:00.0: added PHC on eth0
Aug 13 08:40:45.987741 kernel: scsi host0: ahci
Aug 13 08:40:45.987804 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Aug 13 08:40:45.987867 kernel: scsi host1: ahci
Aug 13 08:40:45.987928 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6d:9a:ae
Aug 13 08:40:45.987991 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
Aug 13 08:40:45.988054 kernel: scsi host2: ahci
Aug 13 08:40:45.988116 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Aug 13 08:40:45.988179 kernel: scsi host3: ahci
Aug 13 08:40:45.988238 kernel: igb 0000:04:00.0: added PHC on eth1
Aug 13 08:40:45.988303 kernel: scsi host4: ahci
Aug 13 08:40:45.988365 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Aug 13 08:40:45.988428 kernel: scsi host5: ahci
Aug 13 08:40:45.988487 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6d:9a:af
Aug 13 08:40:45.988556 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
Aug 13 08:40:45.988620 kernel: scsi host6: ahci
Aug 13 08:40:45.988680 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Aug 13 08:40:45.988743 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127
Aug 13 08:40:45.988751 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127
Aug 13 08:40:45.988759 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127
Aug 13 08:40:45.649739 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 08:40:46.095646 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127
Aug 13 08:40:46.095658 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127
Aug 13 08:40:46.095666 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127
Aug 13 08:40:46.095674 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127
Aug 13 08:40:46.095681 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014
Aug 13 08:40:46.080218 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 08:40:46.136635 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Aug 13 08:40:46.132746 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 08:40:46.148589 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 08:40:46.148617 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 08:40:46.184647 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 08:40:46.195630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 08:40:46.195658 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 08:40:46.216614 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 08:40:46.245684 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 08:40:46.245832 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 08:40:46.291964 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 08:40:46.312707 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 08:40:46.352503 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 08:40:46.389606 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
Aug 13 08:40:46.412700 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 08:40:46.412741 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged
Aug 13 08:40:46.422585 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Aug 13 08:40:46.437551 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 08:40:46.452585 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Aug 13 08:40:46.466597 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 08:40:46.480578 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Aug 13 08:40:46.495585 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Aug 13 08:40:46.509586 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Aug 13 08:40:46.525601 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Aug 13 08:40:46.560608 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Aug 13 08:40:46.560624 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Aug 13 08:40:46.594588 kernel: ata2.00: Features: NCQ-prio
Aug 13 08:40:46.594604 kernel: ata1.00: Features: NCQ-prio
Aug 13 08:40:46.631575 kernel: ata2.00: configured for UDMA/133
Aug 13 08:40:46.631596 kernel: ata1.00: configured for UDMA/133
Aug 13 08:40:46.644598 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Aug 13 08:40:46.644703 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Aug 13 08:40:46.661590 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014
Aug 13 08:40:46.661708 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Aug 13 08:40:46.674981 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Aug 13 08:40:46.709546 kernel: igb 0000:04:00.0 eno2: renamed from eth1
Aug 13 08:40:46.761691 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Aug 13 08:40:46.762690 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Aug 13 08:40:46.789545 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Aug 13 08:40:46.789668 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Aug 13 08:40:46.799851 kernel: igb 0000:03:00.0 eno1: renamed from eth0
Aug 13 08:40:46.799998 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Aug 13 08:40:46.834640 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Aug 13 08:40:46.865954 kernel: hub 1-0:1.0: USB hub found
Aug 13 08:40:46.866046 kernel: hub 1-0:1.0: 16 ports detected
Aug 13 08:40:46.891737 kernel: hub 2-0:1.0: USB hub found
Aug 13 08:40:46.891830 kernel: hub 2-0:1.0: 10 ports detected
Aug 13 08:40:46.900545 kernel: ata2.00: Enabling discard_zeroes_data
Aug 13 08:40:46.913013 kernel: ata1.00: Enabling discard_zeroes_data
Aug 13 08:40:46.913029 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Aug 13 08:40:46.917727 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Aug 13 08:40:46.932688 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks
Aug 13 08:40:46.932770 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Aug 13 08:40:46.937909 kernel: sd 1:0:0:0: [sdb] Write Protect is off
Aug 13 08:40:46.943607 kernel: sd 0:0:0:0: [sda] Write Protect is off
Aug 13 08:40:46.952774 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Aug 13 08:40:46.952856 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 13 08:40:46.952922 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Aug 13 08:40:46.960671 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes
Aug 13 08:40:46.977859 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 13 08:40:47.001080 kernel: ata2.00: Enabling discard_zeroes_data
Aug 13 08:40:47.001099 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
Aug 13 08:40:47.001184 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
Aug 13 08:40:47.001590 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Aug 13 08:40:47.015595 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
Aug 13 08:40:47.015684 kernel: ata1.00: Enabling discard_zeroes_data
Aug 13 08:40:47.119983 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Aug 13 08:40:47.127624 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 08:40:47.181007 kernel: GPT:9289727 != 937703087
Aug 13 08:40:47.196007 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 08:40:47.208721 kernel: GPT:9289727 != 937703087
Aug 13 08:40:47.222876 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 08:40:47.236854 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 08:40:47.250904 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Aug 13 08:40:47.251043 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Aug 13 08:40:47.269628 kernel: hub 1-14:1.0: USB hub found
Aug 13 08:40:47.292570 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (551)
Aug 13 08:40:47.292623 kernel: hub 1-14:1.0: 4 ports detected
Aug 13 08:40:47.295087 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT.
Aug 13 08:40:47.403654 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (561)
Aug 13 08:40:47.403671 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0
Aug 13 08:40:47.403763 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2
Aug 13 08:40:47.392431 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM.
Aug 13 08:40:47.420682 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Aug 13 08:40:47.455981 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A.
Aug 13 08:40:47.466706 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A.
Aug 13 08:40:47.506943 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 08:40:47.548672 kernel: ata1.00: Enabling discard_zeroes_data
Aug 13 08:40:47.548686 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 08:40:47.548739 disk-uuid[731]: Primary Header is updated.
Aug 13 08:40:47.548739 disk-uuid[731]: Secondary Entries is updated.
Aug 13 08:40:47.548739 disk-uuid[731]: Secondary Header is updated.
Aug 13 08:40:47.609660 kernel: ata1.00: Enabling discard_zeroes_data
Aug 13 08:40:47.609671 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 08:40:47.609678 kernel: ata1.00: Enabling discard_zeroes_data
Aug 13 08:40:47.609684 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Aug 13 08:40:47.609702 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 08:40:47.714552 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 13 08:40:47.736355 kernel: usbcore: registered new interface driver usbhid
Aug 13 08:40:47.736428 kernel: usbhid: USB HID core driver
Aug 13 08:40:47.779550 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Aug 13 08:40:47.875905 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Aug 13 08:40:47.876043 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Aug 13 08:40:47.910426 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Aug 13 08:40:48.604039 kernel: ata1.00: Enabling discard_zeroes_data
Aug 13 08:40:48.624454 disk-uuid[732]: The operation has completed successfully.
Aug 13 08:40:48.632741 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 08:40:48.657526 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 08:40:48.657641 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 08:40:48.695798 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 08:40:48.720744 sh[750]: Success
Aug 13 08:40:48.731596 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Aug 13 08:40:48.766960 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 08:40:48.779401 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 08:40:48.787165 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 08:40:48.842705 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 08:40:48.842724 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 08:40:48.865385 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 08:40:48.885612 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 08:40:48.904700 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 08:40:48.942544 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Aug 13 08:40:48.943665 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 08:40:48.943915 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 08:40:48.965823 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 08:40:48.985108 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 08:40:49.085863 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 08:40:49.085875 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 08:40:49.085882 kernel: BTRFS info (device sda6): using free space tree
Aug 13 08:40:49.085889 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 08:40:49.085898 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 08:40:49.090863 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 08:40:49.122749 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 08:40:49.128827 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 08:40:49.147731 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 08:40:49.180643 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 08:40:49.205744 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 08:40:49.215101 unknown[834]: fetched base config from "system"
Aug 13 08:40:49.212793 ignition[834]: Ignition 2.19.0
Aug 13 08:40:49.215105 unknown[834]: fetched user config from "system"
Aug 13 08:40:49.212797 ignition[834]: Stage: fetch-offline
Aug 13 08:40:49.216039 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 08:40:49.212819 ignition[834]: no configs at "/usr/lib/ignition/base.d"
Aug 13 08:40:49.217752 systemd-networkd[933]: lo: Link UP
Aug 13 08:40:49.212825 ignition[834]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Aug 13 08:40:49.217755 systemd-networkd[933]: lo: Gained carrier
Aug 13 08:40:49.212899 ignition[834]: parsed url from cmdline: ""
Aug 13 08:40:49.220279 systemd-networkd[933]: Enumeration completed
Aug 13 08:40:49.212901 ignition[834]: no config URL provided
Aug 13 08:40:49.221018 systemd-networkd[933]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 08:40:49.212903 ignition[834]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 08:40:49.236938 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 08:40:49.212940 ignition[834]: parsing config with SHA512: ee95c5714ba2ddc2801d6533abf8ae97a0962cd259d96aadfe37b3ca449ff4283bdffdbcad1bfaec7ca1e48a1b93a6e44c364f8ed6b0ce8bbdee2f39b110aee0
Aug 13 08:40:49.251331 systemd-networkd[933]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 08:40:49.215317 ignition[834]: fetch-offline: fetch-offline passed
Aug 13 08:40:49.255979 systemd[1]: Reached target network.target - Network.
Aug 13 08:40:49.215319 ignition[834]: POST message to Packet Timeline
Aug 13 08:40:49.275774 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 13 08:40:49.215322 ignition[834]: POST Status error: resource requires networking
Aug 13 08:40:49.279907 systemd-networkd[933]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 08:40:49.215358 ignition[834]: Ignition finished successfully
Aug 13 08:40:49.285689 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 08:40:49.300756 ignition[946]: Ignition 2.19.0
Aug 13 08:40:49.486709 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Aug 13 08:40:49.483076 systemd-networkd[933]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 08:40:49.300760 ignition[946]: Stage: kargs
Aug 13 08:40:49.300867 ignition[946]: no configs at "/usr/lib/ignition/base.d"
Aug 13 08:40:49.300873 ignition[946]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Aug 13 08:40:49.301388 ignition[946]: kargs: kargs passed
Aug 13 08:40:49.301391 ignition[946]: POST message to Packet Timeline
Aug 13 08:40:49.301400 ignition[946]: GET https://metadata.packet.net/metadata: attempt #1
Aug 13 08:40:49.301877 ignition[946]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56760->[::1]:53: read: connection refused
Aug 13 08:40:49.502027 ignition[946]: GET https://metadata.packet.net/metadata: attempt #2
Aug 13 08:40:49.502601 ignition[946]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44524->[::1]:53: read: connection refused
Aug 13 08:40:49.763579 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Aug 13 08:40:49.765093 systemd-networkd[933]: eno1: Link UP
Aug 13 08:40:49.765235 systemd-networkd[933]: eno2: Link UP
Aug 13 08:40:49.765365 systemd-networkd[933]: enp1s0f0np0: Link UP
Aug 13 08:40:49.765520 systemd-networkd[933]: enp1s0f0np0: Gained carrier
Aug 13 08:40:49.775735 systemd-networkd[933]: enp1s0f1np1: Link UP
Aug 13 08:40:49.809722 systemd-networkd[933]: enp1s0f0np0: DHCPv4 address 147.75.71.95/31, gateway 147.75.71.94 acquired from 145.40.83.140
Aug 13 08:40:49.903536 ignition[946]: GET https://metadata.packet.net/metadata: attempt #3
Aug 13 08:40:49.904735 ignition[946]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46302->[::1]:53: read: connection refused
Aug 13 08:40:50.522402 systemd-networkd[933]: enp1s0f1np1: Gained carrier
Aug 13 08:40:50.705260 ignition[946]: GET https://metadata.packet.net/metadata: attempt #4
Aug 13 08:40:50.706409 ignition[946]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41540->[::1]:53: read: connection refused
Aug 13 08:40:51.354183 systemd-networkd[933]: enp1s0f0np0: Gained IPv6LL
Aug 13 08:40:51.802187 systemd-networkd[933]: enp1s0f1np1: Gained IPv6LL
Aug 13 08:40:52.307658 ignition[946]: GET https://metadata.packet.net/metadata: attempt #5
Aug 13 08:40:52.308817 ignition[946]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46894->[::1]:53: read: connection refused
Aug 13 08:40:55.511255 ignition[946]: GET https://metadata.packet.net/metadata: attempt #6
Aug 13 08:40:56.543582 ignition[946]: GET result: OK
Aug 13 08:40:57.116395 ignition[946]: Ignition finished successfully
Aug 13 08:40:57.121536 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 08:40:57.151856 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 08:40:57.158089 ignition[969]: Ignition 2.19.0
Aug 13 08:40:57.158094 ignition[969]: Stage: disks
Aug 13 08:40:57.158206 ignition[969]: no configs at "/usr/lib/ignition/base.d"
Aug 13 08:40:57.158213 ignition[969]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Aug 13 08:40:57.158759 ignition[969]: disks: disks passed
Aug 13 08:40:57.158761 ignition[969]: POST message to Packet Timeline
Aug 13 08:40:57.158770 ignition[969]: GET https://metadata.packet.net/metadata: attempt #1
Aug 13 08:40:58.189622 ignition[969]: GET result: OK
Aug 13 08:40:58.675494 ignition[969]: Ignition finished successfully
Aug 13 08:40:58.676798 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 08:40:58.692973 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 08:40:58.711771 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 08:40:58.732832 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 08:40:58.753961 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 08:40:58.773845 systemd[1]: Reached target basic.target - Basic System.
Aug 13 08:40:58.801816 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 08:40:58.834404 systemd-fsck[988]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 08:40:58.844132 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 08:40:58.870818 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 08:40:58.969589 kernel: EXT4-fs (sda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 08:40:58.970056 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 08:40:58.980035 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 08:40:59.000681 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 08:40:59.035550 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (998)
Aug 13 08:40:59.065857 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 08:40:59.065874 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 08:40:59.084591 kernel: BTRFS info (device sda6): using free space tree
Aug 13 08:40:59.098626 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 08:40:59.145781 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 08:40:59.145793 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 08:40:59.113157 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Aug 13 08:40:59.166826 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Aug 13 08:40:59.186630 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 08:40:59.186649 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 08:40:59.234772 coreos-metadata[1015]: Aug 13 08:40:59.214 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Aug 13 08:40:59.255740 coreos-metadata[1016]: Aug 13 08:40:59.214 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Aug 13 08:40:59.198588 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 08:40:59.224834 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 08:40:59.251756 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 08:40:59.307658 initrd-setup-root[1030]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 08:40:59.318669 initrd-setup-root[1037]: cut: /sysroot/etc/group: No such file or directory
Aug 13 08:40:59.328655 initrd-setup-root[1044]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 08:40:59.338812 initrd-setup-root[1051]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 08:40:59.337437 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 08:40:59.350857 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 08:40:59.358348 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 08:40:59.391870 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 08:40:59.422813 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 08:40:59.433750 ignition[1118]: INFO : Ignition 2.19.0
Aug 13 08:40:59.433750 ignition[1118]: INFO : Stage: mount
Aug 13 08:40:59.448780 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 08:40:59.448780 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Aug 13 08:40:59.448780 ignition[1118]: INFO : mount: mount passed
Aug 13 08:40:59.448780 ignition[1118]: INFO : POST message to Packet Timeline
Aug 13 08:40:59.448780 ignition[1118]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Aug 13 08:40:59.442841 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 08:41:00.282407 coreos-metadata[1016]: Aug 13 08:41:00.282 INFO Fetch successful
Aug 13 08:41:00.362800 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Aug 13 08:41:00.362864 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Aug 13 08:41:00.509317 ignition[1118]: INFO : GET result: OK
Aug 13 08:41:00.585437 coreos-metadata[1015]: Aug 13 08:41:00.585 INFO Fetch successful
Aug 13 08:41:00.651223 coreos-metadata[1015]: Aug 13 08:41:00.651 INFO wrote hostname ci-4081.3.5-a-711ae8cc9f to /sysroot/etc/hostname
Aug 13 08:41:00.652866 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 08:41:01.704703 ignition[1118]: INFO : Ignition finished successfully
Aug 13 08:41:01.708144 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 08:41:01.743744 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 08:41:01.753835 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 08:41:01.826448 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1144)
Aug 13 08:41:01.826475 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 08:41:01.846777 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 08:41:01.865030 kernel: BTRFS info (device sda6): using free space tree
Aug 13 08:41:01.904516 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 08:41:01.904535 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 08:41:01.918808 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 08:41:01.943375 ignition[1161]: INFO : Ignition 2.19.0
Aug 13 08:41:01.943375 ignition[1161]: INFO : Stage: files
Aug 13 08:41:01.957852 ignition[1161]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 08:41:01.957852 ignition[1161]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Aug 13 08:41:01.957852 ignition[1161]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 08:41:01.957852 ignition[1161]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 08:41:01.957852 ignition[1161]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 08:41:01.957852 ignition[1161]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 08:41:01.957852 ignition[1161]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 08:41:01.957852 ignition[1161]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 08:41:01.957852 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 08:41:01.957852 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Aug 13 08:41:01.947501 unknown[1161]: wrote ssh authorized keys file for user: core
Aug 13 08:41:02.092780 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 08:41:02.216293 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 08:41:02.232772 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Aug 13 08:41:02.821035 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 08:41:03.171388 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 08:41:03.171388 ignition[1161]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 08:41:03.201776 ignition[1161]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 08:41:03.201776 ignition[1161]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 08:41:03.201776 ignition[1161]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 08:41:03.201776 ignition[1161]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 08:41:03.201776 ignition[1161]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 08:41:03.201776 ignition[1161]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 08:41:03.201776 ignition[1161]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 08:41:03.201776 ignition[1161]: INFO : files: files passed
Aug 13 08:41:03.201776 ignition[1161]: INFO : POST message to Packet Timeline
Aug 13 08:41:03.201776 ignition[1161]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Aug 13 08:41:04.508280 ignition[1161]: INFO : GET result: OK
Aug 13 08:41:05.194396 ignition[1161]: INFO : Ignition finished successfully
Aug 13 08:41:05.198104 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 08:41:05.227831 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 08:41:05.228235 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 08:41:05.247052 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 08:41:05.247122 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 08:41:05.286361 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 08:41:05.300048 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 08:41:05.342730 initrd-setup-root-after-ignition[1201]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 08:41:05.342730 initrd-setup-root-after-ignition[1201]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 08:41:05.337656 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 08:41:05.394856 initrd-setup-root-after-ignition[1205]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 08:41:05.415755 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 08:41:05.415815 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 08:41:05.434995 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 08:41:05.455717 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 08:41:05.472985 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 08:41:05.482928 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 08:41:05.565232 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 08:41:05.593079 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 08:41:05.621776 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 08:41:05.633059 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 08:41:05.654240 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 08:41:05.673102 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 08:41:05.673505 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 08:41:05.701382 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 08:41:05.723176 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 08:41:05.741177 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 08:41:05.759177 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 08:41:05.780256 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 08:41:05.801158 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 08:41:05.821153 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 08:41:05.842201 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 08:41:05.863297 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 08:41:05.884172 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 08:41:05.902047 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 08:41:05.902460 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 08:41:05.928253 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 08:41:05.948284 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 08:41:05.969020 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 08:41:05.969461 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 08:41:05.991449 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 08:41:05.991884 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 08:41:06.024246 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 08:41:06.024733 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 08:41:06.045378 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 08:41:06.064139 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 08:41:06.064589 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 08:41:06.085174 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 08:41:06.105163 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 08:41:06.123040 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 08:41:06.123355 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 08:41:06.143201 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 08:41:06.143511 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 08:41:06.166229 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 08:41:06.282756 ignition[1225]: INFO : Ignition 2.19.0
Aug 13 08:41:06.282756 ignition[1225]: INFO : Stage: umount
Aug 13 08:41:06.282756 ignition[1225]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 08:41:06.282756 ignition[1225]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Aug 13 08:41:06.282756 ignition[1225]: INFO : umount: umount passed
Aug 13 08:41:06.282756 ignition[1225]: INFO : POST message to Packet Timeline
Aug 13 08:41:06.282756 ignition[1225]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Aug 13 08:41:06.166665 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 08:41:06.186254 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 08:41:06.186663 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 08:41:06.204193 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Aug 13 08:41:06.204601 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 08:41:06.234684 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 08:41:06.249656 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 08:41:06.249778 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 08:41:06.277758 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 08:41:06.282801 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 08:41:06.282960 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 08:41:06.292049 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 08:41:06.292227 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 08:41:06.342482 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 08:41:06.342922 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 08:41:06.342982 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 08:41:06.365228 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 08:41:06.365330 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 08:41:07.324461 ignition[1225]: INFO : GET result: OK
Aug 13 08:41:07.741035 ignition[1225]: INFO : Ignition finished successfully
Aug 13 08:41:07.742568 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 08:41:07.742732 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 08:41:07.762868 systemd[1]: Stopped target network.target - Network.
Aug 13 08:41:07.778822 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 08:41:07.779036 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 08:41:07.796933 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 08:41:07.797094 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 08:41:07.814919 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 08:41:07.815075 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 08:41:07.832951 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 08:41:07.833117 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 08:41:07.850958 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 08:41:07.851130 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 08:41:07.870279 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 08:41:07.885702 systemd-networkd[933]: enp1s0f0np0: DHCPv6 lease lost
Aug 13 08:41:07.889075 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 08:41:07.893693 systemd-networkd[933]: enp1s0f1np1: DHCPv6 lease lost
Aug 13 08:41:07.907731 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 08:41:07.908021 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 08:41:07.927838 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 08:41:07.928194 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 08:41:07.948703 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 08:41:07.948838 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 08:41:07.985750 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 08:41:08.007708 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 08:41:08.007750 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 08:41:08.026839 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 08:41:08.026933 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 08:41:08.046946 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 08:41:08.047116 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 08:41:08.065940 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 08:41:08.066110 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 08:41:08.086159 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 08:41:08.107800 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 08:41:08.108206 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 08:41:08.143644 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 08:41:08.143800 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 08:41:08.148079 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 08:41:08.148189 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 08:41:08.167172 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 08:41:08.167335 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 08:41:08.205122 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 08:41:08.205411 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 08:41:08.244741 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 08:41:08.245010 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 08:41:08.300857 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 08:41:08.334618 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 08:41:08.334663 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 08:41:08.356747 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 08:41:08.356910 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 08:41:08.379039 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 08:41:08.579764 systemd-journald[267]: Received SIGTERM from PID 1 (systemd).
Aug 13 08:41:08.379371 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 08:41:08.411198 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 08:41:08.411480 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 08:41:08.430835 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 08:41:08.468065 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 08:41:08.509870 systemd[1]: Switching root.
Aug 13 08:41:08.632771 systemd-journald[267]: Journal stopped
Aug 13 08:41:11.354564 kernel: SELinux:  policy capability network_peer_controls=1
Aug 13 08:41:11.354578 kernel: SELinux:  policy capability open_perms=1
Aug 13 08:41:11.354585 kernel: SELinux:  policy capability extended_socket_class=1
Aug 13 08:41:11.354592 kernel: SELinux:  policy capability always_check_network=0
Aug 13 08:41:11.354597 kernel: SELinux:  policy capability cgroup_seclabel=1
Aug 13 08:41:11.354602 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Aug 13 08:41:11.354608 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Aug 13 08:41:11.354613 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Aug 13 08:41:11.354619 kernel: audit: type=1403 audit(1755074468.918:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 08:41:11.354626 systemd[1]: Successfully loaded SELinux policy in 171.053ms.
Aug 13 08:41:11.354634 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.378ms.
Aug 13 08:41:11.354641 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 08:41:11.354647 systemd[1]: Detected architecture x86-64.
Aug 13 08:41:11.354653 systemd[1]: Detected first boot.
Aug 13 08:41:11.354659 systemd[1]: Hostname set to .
Aug 13 08:41:11.354667 systemd[1]: Initializing machine ID from random generator.
Aug 13 08:41:11.354673 zram_generator::config[1279]: No configuration found.
Aug 13 08:41:11.354679 systemd[1]: Populated /etc with preset unit settings.
Aug 13 08:41:11.354685 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 08:41:11.354691 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 08:41:11.354697 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 08:41:11.354704 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 08:41:11.354711 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 08:41:11.354717 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 08:41:11.354724 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 08:41:11.354730 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 08:41:11.354737 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 08:41:11.354743 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 08:41:11.354749 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 08:41:11.354756 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 08:41:11.354763 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 08:41:11.354769 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 08:41:11.354775 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 08:41:11.354781 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 08:41:11.354788 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 08:41:11.354794 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
Aug 13 08:41:11.354800 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 08:41:11.354807 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 08:41:11.354814 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 08:41:11.354820 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 08:41:11.354828 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 08:41:11.354835 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 08:41:11.354841 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 08:41:11.354848 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 08:41:11.354855 systemd[1]: Reached target swap.target - Swaps.
Aug 13 08:41:11.354862 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 08:41:11.354868 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 08:41:11.354875 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 08:41:11.354881 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 08:41:11.354887 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 08:41:11.354895 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 08:41:11.354903 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 08:41:11.354910 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 08:41:11.354916 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 08:41:11.354923 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 08:41:11.354929 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 08:41:11.354936 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 08:41:11.354944 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 08:41:11.354951 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 08:41:11.354957 systemd[1]: Reached target machines.target - Containers.
Aug 13 08:41:11.354964 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 08:41:11.354970 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 08:41:11.354977 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 08:41:11.354984 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 08:41:11.354990 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 08:41:11.354997 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 08:41:11.355005 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 08:41:11.355011 kernel: ACPI: bus type drm_connector registered
Aug 13 08:41:11.355017 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 08:41:11.355024 kernel: fuse: init (API version 7.39)
Aug 13 08:41:11.355030 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 08:41:11.355036 kernel: loop: module loaded
Aug 13 08:41:11.355042 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 08:41:11.355049 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 08:41:11.355057 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 08:41:11.355063 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 08:41:11.355070 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 08:41:11.355076 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 08:41:11.355090 systemd-journald[1382]: Collecting audit messages is disabled.
Aug 13 08:41:11.355105 systemd-journald[1382]: Journal started
Aug 13 08:41:11.355119 systemd-journald[1382]: Runtime Journal (/run/log/journal/7603d96e57a7451a86981c9f37c97e1e) is 8.0M, max 639.9M, 631.9M free.
Aug 13 08:41:09.467207 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 08:41:09.485850 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Aug 13 08:41:09.486110 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 08:41:11.383592 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 08:41:11.417594 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 08:41:11.451609 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 08:41:11.484591 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 08:41:11.513593 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 08:41:11.513620 systemd[1]: Stopped verity-setup.service.
Aug 13 08:41:11.580588 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 08:41:11.601739 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 08:41:11.611145 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 08:41:11.620826 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 08:41:11.631819 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 08:41:11.642806 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 08:41:11.653790 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 08:41:11.663787 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 08:41:11.674902 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 08:41:11.686189 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 08:41:11.698253 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 08:41:11.698527 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 08:41:11.710501 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 08:41:11.710927 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 08:41:11.722495 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 08:41:11.722933 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 08:41:11.733502 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 08:41:11.733915 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 08:41:11.745480 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 08:41:11.745900 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 08:41:11.756497 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 08:41:11.756957 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 08:41:11.767499 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 08:41:11.778469 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 08:41:11.790445 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 08:41:11.802449 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 08:41:11.839301 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 08:41:11.863797 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 08:41:11.874426 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 08:41:11.884714 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 08:41:11.884741 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 08:41:11.895717 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 13 08:41:11.918032 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 08:41:11.930777 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 08:41:11.940813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 08:41:11.942305 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 08:41:11.952673 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 08:41:11.963694 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 08:41:11.964334 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 08:41:11.967875 systemd-journald[1382]: Time spent on flushing to /var/log/journal/7603d96e57a7451a86981c9f37c97e1e is 13.753ms for 1366 entries.
Aug 13 08:41:11.967875 systemd-journald[1382]: System Journal (/var/log/journal/7603d96e57a7451a86981c9f37c97e1e) is 8.0M, max 195.6M, 187.6M free.
Aug 13 08:41:12.005772 systemd-journald[1382]: Received client request to flush runtime journal.
Aug 13 08:41:11.981731 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 08:41:11.982405 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 08:41:11.991128 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 08:41:12.000498 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 08:41:12.012499 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 08:41:12.036601 kernel: loop0: detected capacity change from 0 to 140768
Aug 13 08:41:12.041596 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 08:41:12.068781 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 08:41:12.076545 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 08:41:12.086773 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 08:41:12.097825 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 08:41:12.117761 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 08:41:12.125545 kernel: loop1: detected capacity change from 0 to 224512
Aug 13 08:41:12.135753 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 08:41:12.146829 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 08:41:12.160638 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 08:41:12.188715 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 13 08:41:12.200545 kernel: loop2: detected capacity change from 0 to 8
Aug 13 08:41:12.213423 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 08:41:12.225158 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 08:41:12.237788 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 13 08:41:12.252547 kernel: loop3: detected capacity change from 0 to 142488
Aug 13 08:41:12.262584 systemd-tmpfiles[1433]: ACLs are not supported, ignoring.
Aug 13 08:41:12.262594 systemd-tmpfiles[1433]: ACLs are not supported, ignoring.
Aug 13 08:41:12.263122 udevadm[1418]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 13 08:41:12.264993 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 08:41:12.302879 ldconfig[1408]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 08:41:12.304179 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 08:41:12.324614 kernel: loop4: detected capacity change from 0 to 140768
Aug 13 08:41:12.356596 kernel: loop5: detected capacity change from 0 to 224512
Aug 13 08:41:12.399379 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 08:41:12.406587 kernel: loop6: detected capacity change from 0 to 8
Aug 13 08:41:12.425599 kernel: loop7: detected capacity change from 0 to 142488
Aug 13 08:41:12.434769 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 08:41:12.437453 (sd-merge)[1438]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
Aug 13 08:41:12.437776 (sd-merge)[1438]: Merged extensions into '/usr'.
Aug 13 08:41:12.447250 systemd[1]: Reloading requested from client PID 1413 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 08:41:12.447256 systemd[1]: Reloading...
Aug 13 08:41:12.447483 systemd-udevd[1441]: Using default interface naming scheme 'v255'.
Aug 13 08:41:12.486550 zram_generator::config[1474]: No configuration found.
Aug 13 08:41:12.511553 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1506)
Aug 13 08:41:12.516553 kernel: IPMI message handler: version 39.2
Aug 13 08:41:12.516607 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 08:41:12.555308 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Aug 13 08:41:12.591647 kernel: ACPI: button: Sleep Button [SLPB]
Aug 13 08:41:12.591723 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Aug 13 08:41:12.616327 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Aug 13 08:41:12.616490 kernel: ACPI: button: Power Button [PWRF]
Aug 13 08:41:12.616503 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Aug 13 08:41:12.620130 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 08:41:12.661593 kernel: ipmi device interface
Aug 13 08:41:12.661626 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Aug 13 08:41:12.673880 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped.
Aug 13 08:41:12.673913 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Aug 13 08:41:12.700628 systemd[1]: Reloading finished in 253 ms.
Aug 13 08:41:12.738546 kernel: iTCO_vendor_support: vendor-support=0
Aug 13 08:41:12.752546 kernel: ipmi_si: IPMI System Interface driver
Aug 13 08:41:12.752578 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Aug 13 08:41:12.752696 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Aug 13 08:41:12.763488 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Aug 13 08:41:12.828315 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Aug 13 08:41:12.845145 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Aug 13 08:41:12.862054 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Aug 13 08:41:12.880838 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Aug 13 08:41:12.916570 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Aug 13 08:41:12.916700 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Aug 13 08:41:12.936859 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Aug 13 08:41:12.956143 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 08:41:12.982575 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400)
Aug 13 08:41:12.983204 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Aug 13 08:41:12.992779 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 08:41:12.993544 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Aug 13 08:41:13.051606 kernel: intel_rapl_common: Found RAPL domain package
Aug 13 08:41:13.051650 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20)
Aug 13 08:41:13.051749 kernel: intel_rapl_common: Found RAPL domain core
Aug 13 08:41:13.051761 kernel: intel_rapl_common: Found RAPL domain dram
Aug 13 08:41:13.120796 systemd[1]: Starting ensure-sysext.service...
Aug 13 08:41:13.129105 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 08:41:13.140496 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 08:41:13.150205 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 08:41:13.150714 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 08:41:13.152005 systemd[1]: Reloading requested from client PID 1617 ('systemctl') (unit ensure-sysext.service)...
Aug 13 08:41:13.152012 systemd[1]: Reloading...
Aug 13 08:41:13.176550 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Aug 13 08:41:13.176678 zram_generator::config[1648]: No configuration found.
Aug 13 08:41:13.211551 kernel: ipmi_ssif: IPMI SSIF Interface driver
Aug 13 08:41:13.229678 systemd-tmpfiles[1621]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 08:41:13.229908 systemd-tmpfiles[1621]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 08:41:13.230458 systemd-tmpfiles[1621]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 08:41:13.230645 systemd-tmpfiles[1621]: ACLs are not supported, ignoring.
Aug 13 08:41:13.230691 systemd-tmpfiles[1621]: ACLs are not supported, ignoring.
Aug 13 08:41:13.232714 systemd-tmpfiles[1621]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 08:41:13.232718 systemd-tmpfiles[1621]: Skipping /boot
Aug 13 08:41:13.237005 systemd-tmpfiles[1621]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 08:41:13.237010 systemd-tmpfiles[1621]: Skipping /boot
Aug 13 08:41:13.249365 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 08:41:13.303739 systemd[1]: Reloading finished in 151 ms.
Aug 13 08:41:13.317159 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 08:41:13.341753 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 08:41:13.352791 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 08:41:13.363760 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 08:41:13.386787 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 08:41:13.398111 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 08:41:13.404580 augenrules[1730]: No rules
Aug 13 08:41:13.409270 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 08:41:13.432962 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 08:41:13.439751 lvm[1735]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 08:41:13.444663 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 08:41:13.455267 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 08:41:13.467498 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 08:41:13.477195 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 08:41:13.486864 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 08:41:13.502653 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 08:41:13.512992 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 08:41:13.523894 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 08:41:13.536938 systemd-networkd[1619]: lo: Link UP
Aug 13 08:41:13.536942 systemd-networkd[1619]: lo: Gained carrier
Aug 13 08:41:13.539743 systemd-networkd[1619]: bond0: netdev ready
Aug 13 08:41:13.540670 systemd-networkd[1619]: Enumeration completed
Aug 13 08:41:13.551771 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 08:41:13.561173 systemd-networkd[1619]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:5f:81:38.network.
Aug 13 08:41:13.563818 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 08:41:13.574712 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 08:41:13.580403 systemd-resolved[1737]: Positive Trust Anchors:
Aug 13 08:41:13.580412 systemd-resolved[1737]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 08:41:13.580436 systemd-resolved[1737]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 08:41:13.583411 systemd-resolved[1737]: Using system hostname 'ci-4081.3.5-a-711ae8cc9f'.
Aug 13 08:41:13.585655 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 08:41:13.585775 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 08:41:13.601682 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 08:41:13.603769 lvm[1756]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 08:41:13.614176 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 08:41:13.625085 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 08:41:13.637105 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 08:41:13.647615 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 08:41:13.654044 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 08:41:13.666165 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 08:41:13.676573 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 08:41:13.676632 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 08:41:13.677445 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 08:41:13.689821 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 08:41:13.689895 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 08:41:13.700786 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 08:41:13.700860 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 08:41:13.711778 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 08:41:13.711849 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 08:41:13.721777 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 08:41:13.735218 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 08:41:13.735362 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 08:41:13.743681 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 08:41:13.754173 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 08:41:13.764208 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 08:41:13.775182 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 08:41:13.784684 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 08:41:13.784766 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 08:41:13.784821 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 08:41:13.785456 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 08:41:13.785531 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 08:41:13.796836 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 08:41:13.796913 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 08:41:13.806869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 08:41:13.806948 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 08:41:13.817793 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 08:41:13.817867 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 08:41:13.828551 systemd[1]: Finished ensure-sysext.service.
Aug 13 08:41:13.838074 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 08:41:13.838106 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 08:41:13.848711 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 13 08:41:13.893446 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 13 08:41:13.904609 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 08:41:14.261595 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Aug 13 08:41:14.283460 systemd-networkd[1619]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:5f:81:39.network.
Aug 13 08:41:14.283566 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link
Aug 13 08:41:14.540619 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Aug 13 08:41:14.572624 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link
Aug 13 08:41:14.572721 systemd-networkd[1619]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Aug 13 08:41:14.574990 systemd-networkd[1619]: enp1s0f0np0: Link UP
Aug 13 08:41:14.575513 systemd-networkd[1619]: enp1s0f0np0: Gained carrier
Aug 13 08:41:14.577589 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 08:41:14.601641 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Aug 13 08:41:14.610816 systemd[1]: Reached target network.target - Network.
Aug 13 08:41:14.617907 systemd-networkd[1619]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:5f:81:38.network.
Aug 13 08:41:14.618310 systemd-networkd[1619]: enp1s0f1np1: Link UP
Aug 13 08:41:14.618648 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 08:41:14.618803 systemd-networkd[1619]: enp1s0f1np1: Gained carrier
Aug 13 08:41:14.629595 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 08:41:14.629685 systemd-networkd[1619]: bond0: Link UP
Aug 13 08:41:14.629845 systemd-networkd[1619]: bond0: Gained carrier
Aug 13 08:41:14.629938 systemd-timesyncd[1776]: Network configuration changed, trying to establish connection.
Aug 13 08:41:14.639799 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 08:41:14.650593 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 08:41:14.661692 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 08:41:14.671661 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 08:41:14.682575 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 08:41:14.701575 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 08:41:14.701587 systemd[1]: Reached target paths.target - Path Units.
Aug 13 08:41:14.703543 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 25000 Mbps full duplex
Aug 13 08:41:14.703557 kernel: bond0: active interface up!
Aug 13 08:41:14.725627 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 08:41:14.734306 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 08:41:14.744323 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 08:41:14.755758 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 08:41:14.765922 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 08:41:14.775731 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 08:41:14.785623 systemd[1]: Reached target basic.target - Basic System.
Aug 13 08:41:14.793646 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 08:41:14.793659 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 08:41:14.799623 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 08:41:14.810262 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 08:41:14.833250 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 08:41:14.833581 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex
Aug 13 08:41:14.842152 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 08:41:14.845869 coreos-metadata[1781]: Aug 13 08:41:14.845 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Aug 13 08:41:14.852173 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 08:41:14.853029 dbus-daemon[1782]: [system] SELinux support is enabled
Aug 13 08:41:14.854023 jq[1785]: false
Aug 13 08:41:14.862582 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 08:41:14.868740 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 08:41:14.876090 extend-filesystems[1787]: Found loop4
Aug 13 08:41:14.890741 extend-filesystems[1787]: Found loop5
Aug 13 08:41:14.890741 extend-filesystems[1787]: Found loop6
Aug 13 08:41:14.890741 extend-filesystems[1787]: Found loop7
Aug 13 08:41:14.890741 extend-filesystems[1787]: Found sda
Aug 13 08:41:14.890741 extend-filesystems[1787]: Found sda1
Aug 13 08:41:14.890741 extend-filesystems[1787]: Found sda2
Aug 13 08:41:14.890741 extend-filesystems[1787]: Found sda3
Aug 13 08:41:14.890741 extend-filesystems[1787]: Found usr
Aug 13 08:41:14.890741 extend-filesystems[1787]: Found sda4
Aug 13 08:41:14.890741 extend-filesystems[1787]: Found sda6
Aug 13 08:41:14.890741 extend-filesystems[1787]: Found sda7
Aug 13 08:41:14.890741 extend-filesystems[1787]: Found sda9
Aug 13 08:41:14.890741 extend-filesystems[1787]: Checking size of /dev/sda9
Aug 13 08:41:14.890741 extend-filesystems[1787]: Resized partition /dev/sda9
Aug 13 08:41:15.093583 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks
Aug 13 08:41:15.093603 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1554)
Aug 13 08:41:14.879240 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 08:41:15.093691 extend-filesystems[1801]: resize2fs 1.47.1 (20-May-2024)
Aug 13 08:41:14.924697 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 08:41:14.959647 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 08:41:14.970427 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 08:41:15.002654 systemd[1]: Starting tcsd.service - TCG Core Services Daemon...
Aug 13 08:41:15.010900 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 08:41:15.104976 update_engine[1812]: I20250813 08:41:15.045878 1812 main.cc:92] Flatcar Update Engine starting
Aug 13 08:41:15.104976 update_engine[1812]: I20250813 08:41:15.046599 1812 update_check_scheduler.cc:74] Next update check in 2m55s
Aug 13 08:41:15.011221 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 08:41:15.105171 jq[1813]: true
Aug 13 08:41:15.038625 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 08:41:15.039568 systemd-logind[1810]: Watching system buttons on /dev/input/event3 (Power Button)
Aug 13 08:41:15.039577 systemd-logind[1810]: Watching system buttons on /dev/input/event2 (Sleep Button)
Aug 13 08:41:15.039587 systemd-logind[1810]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Aug 13 08:41:15.039897 systemd-logind[1810]: New seat seat0.
Aug 13 08:41:15.043908 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 08:41:15.069848 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 08:41:15.096744 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 08:41:15.096842 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 08:41:15.097004 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 08:41:15.097095 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 08:41:15.114129 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 08:41:15.114217 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 08:41:15.139529 jq[1816]: true
Aug 13 08:41:15.140437 (ntainerd)[1817]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 08:41:15.143898 dbus-daemon[1782]: [system] Successfully activated service 'org.freedesktop.systemd1'
Aug 13 08:41:15.145241 tar[1815]: linux-amd64/LICENSE
Aug 13 08:41:15.145404 tar[1815]: linux-amd64/helm
Aug 13 08:41:15.150204 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Aug 13 08:41:15.150313 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped.
Aug 13 08:41:15.150402 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 08:41:15.171371 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 08:41:15.171498 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 08:41:15.182631 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 08:41:15.182708 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 08:41:15.202134 bash[1844]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 08:41:15.205714 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 08:41:15.209605 sshd_keygen[1809]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 08:41:15.217473 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 08:41:15.229905 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 08:41:15.232478 locksmithd[1846]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 08:41:15.253818 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 08:41:15.262488 systemd[1]: Starting sshkeys.service...
Aug 13 08:41:15.269970 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 08:41:15.270062 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 08:41:15.282525 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 08:41:15.293329 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 13 08:41:15.305419 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 13 08:41:15.317025 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 08:41:15.328386 coreos-metadata[1874]: Aug 13 08:41:15.328 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Aug 13 08:41:15.328835 containerd[1817]: time="2025-08-13T08:41:15.328793245Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Aug 13 08:41:15.329324 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 08:41:15.338448 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1.
Aug 13 08:41:15.341105 containerd[1817]: time="2025-08-13T08:41:15.341083266Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 08:41:15.341988 containerd[1817]: time="2025-08-13T08:41:15.341941943Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 08:41:15.341988 containerd[1817]: time="2025-08-13T08:41:15.341959414Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 08:41:15.341988 containerd[1817]: time="2025-08-13T08:41:15.341969563Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 08:41:15.342092 containerd[1817]: time="2025-08-13T08:41:15.342053327Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 13 08:41:15.342092 containerd[1817]: time="2025-08-13T08:41:15.342063870Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 13 08:41:15.342123 containerd[1817]: time="2025-08-13T08:41:15.342095889Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 08:41:15.342123 containerd[1817]: time="2025-08-13T08:41:15.342103892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 08:41:15.342207 containerd[1817]: time="2025-08-13T08:41:15.342194637Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 08:41:15.342207 containerd[1817]: time="2025-08-13T08:41:15.342204029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 08:41:15.342242 containerd[1817]: time="2025-08-13T08:41:15.342211604Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 08:41:15.342242 containerd[1817]: time="2025-08-13T08:41:15.342217313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 08:41:15.342269 containerd[1817]: time="2025-08-13T08:41:15.342260431Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 08:41:15.342401 containerd[1817]: time="2025-08-13T08:41:15.342369451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 08:41:15.342433 containerd[1817]: time="2025-08-13T08:41:15.342425304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 08:41:15.342449 containerd[1817]: time="2025-08-13T08:41:15.342433751Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 08:41:15.342482 containerd[1817]: time="2025-08-13T08:41:15.342475539Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 08:41:15.342509 containerd[1817]: time="2025-08-13T08:41:15.342503257Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 08:41:15.347738 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 08:41:15.352429 containerd[1817]: time="2025-08-13T08:41:15.352419911Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 08:41:15.352451 containerd[1817]: time="2025-08-13T08:41:15.352440077Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 08:41:15.352466 containerd[1817]: time="2025-08-13T08:41:15.352452898Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 13 08:41:15.352466 containerd[1817]: time="2025-08-13T08:41:15.352462381Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 13 08:41:15.352492 containerd[1817]: time="2025-08-13T08:41:15.352470548Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 08:41:15.352538 containerd[1817]: time="2025-08-13T08:41:15.352531009Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 08:41:15.352701 containerd[1817]: time="2025-08-13T08:41:15.352664295Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 08:41:15.352729 containerd[1817]: time="2025-08-13T08:41:15.352718087Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 13 08:41:15.352729 containerd[1817]: time="2025-08-13T08:41:15.352727722Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 13 08:41:15.352759 containerd[1817]: time="2025-08-13T08:41:15.352734650Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 13 08:41:15.352759 containerd[1817]: time="2025-08-13T08:41:15.352742006Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 08:41:15.352759 containerd[1817]: time="2025-08-13T08:41:15.352748844Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 08:41:15.352759 containerd[1817]: time="2025-08-13T08:41:15.352755563Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 08:41:15.352817 containerd[1817]: time="2025-08-13T08:41:15.352762899Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 08:41:15.352817 containerd[1817]: time="2025-08-13T08:41:15.352770617Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 08:41:15.352817 containerd[1817]: time="2025-08-13T08:41:15.352780749Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 08:41:15.352817 containerd[1817]: time="2025-08-13T08:41:15.352788994Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 08:41:15.352817 containerd[1817]: time="2025-08-13T08:41:15.352795429Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 08:41:15.352817 containerd[1817]: time="2025-08-13T08:41:15.352807647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 08:41:15.352817 containerd[1817]: time="2025-08-13T08:41:15.352815146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 08:41:15.352910 containerd[1817]: time="2025-08-13T08:41:15.352822170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 08:41:15.352910 containerd[1817]: time="2025-08-13T08:41:15.352831755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 08:41:15.352910 containerd[1817]: time="2025-08-13T08:41:15.352839159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 08:41:15.352910 containerd[1817]: time="2025-08-13T08:41:15.352848071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 08:41:15.352910 containerd[1817]: time="2025-08-13T08:41:15.352857249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 08:41:15.352910 containerd[1817]: time="2025-08-13T08:41:15.352864506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 08:41:15.352910 containerd[1817]: time="2025-08-13T08:41:15.352871287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 13 08:41:15.352910 containerd[1817]: time="2025-08-13T08:41:15.352878752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 13 08:41:15.352910 containerd[1817]: time="2025-08-13T08:41:15.352885064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 08:41:15.352910 containerd[1817]: time="2025-08-13T08:41:15.352891723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 13 08:41:15.352910 containerd[1817]: time="2025-08-13T08:41:15.352899165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 08:41:15.352910 containerd[1817]: time="2025-08-13T08:41:15.352907300Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 13 08:41:15.353068 containerd[1817]: time="2025-08-13T08:41:15.352919144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 13 08:41:15.353068 containerd[1817]: time="2025-08-13T08:41:15.352925763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 08:41:15.353068 containerd[1817]: time="2025-08-13T08:41:15.352931593Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 08:41:15.353068 containerd[1817]: time="2025-08-13T08:41:15.352957751Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 08:41:15.353068 containerd[1817]: time="2025-08-13T08:41:15.352966709Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 13 08:41:15.353068 containerd[1817]: time="2025-08-13T08:41:15.352973039Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 08:41:15.353068 containerd[1817]: time="2025-08-13T08:41:15.352980221Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 13 08:41:15.353068 containerd[1817]: time="2025-08-13T08:41:15.352985659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 08:41:15.353068 containerd[1817]: time="2025-08-13T08:41:15.352994342Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 13 08:41:15.353068 containerd[1817]: time="2025-08-13T08:41:15.353002019Z" level=info msg="NRI interface is disabled by configuration."
Aug 13 08:41:15.353068 containerd[1817]: time="2025-08-13T08:41:15.353009461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 08:41:15.353218 containerd[1817]: time="2025-08-13T08:41:15.353161813Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 08:41:15.353218 containerd[1817]: time="2025-08-13T08:41:15.353195129Z" level=info msg="Connect containerd service"
Aug 13 08:41:15.353218 containerd[1817]: time="2025-08-13T08:41:15.353212013Z" level=info msg="using legacy CRI server"
Aug 13 08:41:15.353218 containerd[1817]: time="2025-08-13T08:41:15.353216079Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 13 08:41:15.353332 containerd[1817]: time="2025-08-13T08:41:15.353267116Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 13 08:41:15.353601 containerd[1817]: time="2025-08-13T08:41:15.353549284Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 08:41:15.353688 containerd[1817]: time="2025-08-13T08:41:15.353639805Z" level=info msg="Start subscribing containerd event"
Aug 13 08:41:15.353688 containerd[1817]: time="2025-08-13T08:41:15.353675457Z" level=info msg="Start recovering state"
Aug 13 08:41:15.353720 containerd[1817]: time="2025-08-13T08:41:15.353703601Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 08:41:15.353742 containerd[1817]: time="2025-08-13T08:41:15.353718694Z" level=info msg="Start event monitor"
Aug 13 08:41:15.353742 containerd[1817]: time="2025-08-13T08:41:15.353730136Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 08:41:15.353773 containerd[1817]: time="2025-08-13T08:41:15.353730567Z" level=info msg="Start snapshots syncer"
Aug 13 08:41:15.353773 containerd[1817]: time="2025-08-13T08:41:15.353763710Z" level=info msg="Start cni network conf syncer for default"
Aug 13 08:41:15.353800 containerd[1817]: time="2025-08-13T08:41:15.353772258Z" level=info msg="Start streaming server"
Aug 13 08:41:15.353814 containerd[1817]: time="2025-08-13T08:41:15.353802766Z" level=info msg="containerd successfully booted in 0.025726s"
Aug 13 08:41:15.356921 systemd[1]: Started containerd.service - containerd container runtime.
Aug 13 08:41:15.434553 kernel: EXT4-fs (sda9): resized filesystem to 116605649
Aug 13 08:41:15.459382 extend-filesystems[1801]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Aug 13 08:41:15.459382 extend-filesystems[1801]: old_desc_blocks = 1, new_desc_blocks = 56
Aug 13 08:41:15.459382 extend-filesystems[1801]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long.
Aug 13 08:41:15.488621 extend-filesystems[1787]: Resized filesystem in /dev/sda9
Aug 13 08:41:15.488621 extend-filesystems[1787]: Found sdb
Aug 13 08:41:15.513573 tar[1815]: linux-amd64/README.md
Aug 13 08:41:15.459935 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 08:41:15.460036 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 08:41:15.515855 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 13 08:41:16.377827 systemd-networkd[1619]: bond0: Gained IPv6LL
Aug 13 08:41:16.447122 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 08:41:16.459588 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 08:41:16.483777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 08:41:16.494409 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 08:41:16.513055 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 08:41:17.304448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 08:41:17.323752 (kubelet)[1916]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 08:41:17.766285 kubelet[1916]: E0813 08:41:17.766203 1916 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 08:41:17.767294 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 08:41:17.767372 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 08:41:17.485900 systemd-resolved[1737]: Clock change detected. Flushing caches.
Aug 13 08:41:17.510090 systemd-journald[1382]: Time jumped backwards, rotating.
Aug 13 08:41:17.485953 systemd-timesyncd[1776]: Contacted time server 66.228.58.20:123 (0.flatcar.pool.ntp.org).
Aug 13 08:41:17.485974 systemd-timesyncd[1776]: Initial clock synchronization to Wed 2025-08-13 08:41:17.485885 UTC.
Aug 13 08:41:17.491600 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 13 08:41:17.532418 systemd[1]: Started sshd@0-147.75.71.95:22-147.75.109.163:39510.service - OpenSSH per-connection server daemon (147.75.109.163:39510).
Aug 13 08:41:17.593275 sshd[1935]: Accepted publickey for core from 147.75.109.163 port 39510 ssh2: RSA SHA256:J9bO4QOv3eMMxMAPUK7J8OKu4RrTNshNa8HZDHJxfKY
Aug 13 08:41:17.594559 sshd[1935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 08:41:17.600106 systemd-logind[1810]: New session 1 of user core.
Aug 13 08:41:17.601000 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 13 08:41:17.619504 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 08:41:17.632173 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 08:41:17.654546 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 08:41:17.666112 (systemd)[1939]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 08:41:17.714477 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2
Aug 13 08:41:17.714928 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity
Aug 13 08:41:17.764299 systemd[1939]: Queued start job for default target default.target.
Aug 13 08:41:17.779151 systemd[1939]: Created slice app.slice - User Application Slice.
Aug 13 08:41:17.779173 systemd[1939]: Reached target paths.target - Paths.
Aug 13 08:41:17.779192 systemd[1939]: Reached target timers.target - Timers.
Aug 13 08:41:17.779987 systemd[1939]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 08:41:17.785953 systemd[1939]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 08:41:17.785982 systemd[1939]: Reached target sockets.target - Sockets.
Aug 13 08:41:17.785992 systemd[1939]: Reached target basic.target - Basic System.
Aug 13 08:41:17.786013 systemd[1939]: Reached target default.target - Main User Target.
Aug 13 08:41:17.786029 systemd[1939]: Startup finished in 113ms.
Aug 13 08:41:17.786104 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 08:41:17.805311 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 08:41:17.879395 systemd[1]: Started sshd@1-147.75.71.95:22-147.75.109.163:39524.service - OpenSSH per-connection server daemon (147.75.109.163:39524).
Aug 13 08:41:17.924689 sshd[1952]: Accepted publickey for core from 147.75.109.163 port 39524 ssh2: RSA SHA256:J9bO4QOv3eMMxMAPUK7J8OKu4RrTNshNa8HZDHJxfKY
Aug 13 08:41:17.925299 sshd[1952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 08:41:17.927549 systemd-logind[1810]: New session 2 of user core.
Aug 13 08:41:17.941295 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 08:41:17.998024 sshd[1952]: pam_unix(sshd:session): session closed for user core
Aug 13 08:41:18.015671 systemd[1]: sshd@1-147.75.71.95:22-147.75.109.163:39524.service: Deactivated successfully.
Aug 13 08:41:18.016461 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 08:41:18.017052 systemd-logind[1810]: Session 2 logged out. Waiting for processes to exit.
Aug 13 08:41:18.017664 systemd[1]: Started sshd@2-147.75.71.95:22-147.75.109.163:52192.service - OpenSSH per-connection server daemon (147.75.109.163:52192).
Aug 13 08:41:18.028913 systemd-logind[1810]: Removed session 2.
Aug 13 08:41:18.057308 sshd[1959]: Accepted publickey for core from 147.75.109.163 port 52192 ssh2: RSA SHA256:J9bO4QOv3eMMxMAPUK7J8OKu4RrTNshNa8HZDHJxfKY
Aug 13 08:41:18.057991 sshd[1959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 08:41:18.060411 systemd-logind[1810]: New session 3 of user core.
Aug 13 08:41:18.075443 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 08:41:18.132213 sshd[1959]: pam_unix(sshd:session): session closed for user core
Aug 13 08:41:18.133573 systemd[1]: sshd@2-147.75.71.95:22-147.75.109.163:52192.service: Deactivated successfully.
Aug 13 08:41:18.134356 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 08:41:18.134998 systemd-logind[1810]: Session 3 logged out. Waiting for processes to exit.
Aug 13 08:41:18.135565 systemd-logind[1810]: Removed session 3.
Aug 13 08:41:18.311381 coreos-metadata[1874]: Aug 13 08:41:18.311 INFO Fetch successful
Aug 13 08:41:18.343808 unknown[1874]: wrote ssh authorized keys file for user: core
Aug 13 08:41:18.363926 update-ssh-keys[1966]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 08:41:18.364461 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 13 08:41:18.376148 systemd[1]: Finished sshkeys.service.
Aug 13 08:41:18.409868 coreos-metadata[1781]: Aug 13 08:41:18.409 INFO Fetch successful
Aug 13 08:41:18.452368 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 13 08:41:18.481382 systemd[1]: Starting packet-phone-home.service - Report Success to Packet...
Aug 13 08:41:18.896671 systemd[1]: Finished packet-phone-home.service - Report Success to Packet.
Aug 13 08:41:18.909775 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 13 08:41:18.919908 systemd[1]: Startup finished in 1.760s (kernel) + 25.886s (initrd) + 10.622s (userspace) = 38.269s.
Aug 13 08:41:18.939081 login[1887]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Aug 13 08:41:18.942653 systemd-logind[1810]: New session 4 of user core.
Aug 13 08:41:18.950331 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 08:41:18.958370 login[1885]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Aug 13 08:41:18.961082 systemd-logind[1810]: New session 5 of user core.
Aug 13 08:41:18.961785 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 08:41:27.567743 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 08:41:27.581429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 08:41:27.860972 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 08:41:27.863075 (kubelet)[2008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 08:41:27.887567 kubelet[2008]: E0813 08:41:27.887543 2008 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 08:41:27.889493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 08:41:27.889575 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 08:41:28.150394 systemd[1]: Started sshd@3-147.75.71.95:22-147.75.109.163:44850.service - OpenSSH per-connection server daemon (147.75.109.163:44850).
Aug 13 08:41:28.185319 sshd[2026]: Accepted publickey for core from 147.75.109.163 port 44850 ssh2: RSA SHA256:J9bO4QOv3eMMxMAPUK7J8OKu4RrTNshNa8HZDHJxfKY
Aug 13 08:41:28.186084 sshd[2026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 08:41:28.188922 systemd-logind[1810]: New session 6 of user core.
Aug 13 08:41:28.201434 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 08:41:28.265525 sshd[2026]: pam_unix(sshd:session): session closed for user core
Aug 13 08:41:28.285657 systemd[1]: sshd@3-147.75.71.95:22-147.75.109.163:44850.service: Deactivated successfully.
Aug 13 08:41:28.286324 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 08:41:28.286939 systemd-logind[1810]: Session 6 logged out. Waiting for processes to exit. Aug 13 08:41:28.287573 systemd[1]: Started sshd@4-147.75.71.95:22-147.75.109.163:44856.service - OpenSSH per-connection server daemon (147.75.109.163:44856). Aug 13 08:41:28.288041 systemd-logind[1810]: Removed session 6. Aug 13 08:41:28.318459 sshd[2033]: Accepted publickey for core from 147.75.109.163 port 44856 ssh2: RSA SHA256:J9bO4QOv3eMMxMAPUK7J8OKu4RrTNshNa8HZDHJxfKY Aug 13 08:41:28.319438 sshd[2033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 08:41:28.323118 systemd-logind[1810]: New session 7 of user core. Aug 13 08:41:28.335466 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 08:41:28.389909 sshd[2033]: pam_unix(sshd:session): session closed for user core Aug 13 08:41:28.418094 systemd[1]: sshd@4-147.75.71.95:22-147.75.109.163:44856.service: Deactivated successfully. Aug 13 08:41:28.421606 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 08:41:28.424840 systemd-logind[1810]: Session 7 logged out. Waiting for processes to exit. Aug 13 08:41:28.443945 systemd[1]: Started sshd@5-147.75.71.95:22-147.75.109.163:44858.service - OpenSSH per-connection server daemon (147.75.109.163:44858). Aug 13 08:41:28.446315 systemd-logind[1810]: Removed session 7. Aug 13 08:41:28.500003 sshd[2040]: Accepted publickey for core from 147.75.109.163 port 44858 ssh2: RSA SHA256:J9bO4QOv3eMMxMAPUK7J8OKu4RrTNshNa8HZDHJxfKY Aug 13 08:41:28.501821 sshd[2040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 08:41:28.507960 systemd-logind[1810]: New session 8 of user core. Aug 13 08:41:28.520577 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 08:41:28.579716 sshd[2040]: pam_unix(sshd:session): session closed for user core Aug 13 08:41:28.587747 systemd[1]: sshd@5-147.75.71.95:22-147.75.109.163:44858.service: Deactivated successfully. 
Aug 13 08:41:28.588448 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 08:41:28.589083 systemd-logind[1810]: Session 8 logged out. Waiting for processes to exit. Aug 13 08:41:28.589778 systemd[1]: Started sshd@6-147.75.71.95:22-147.75.109.163:44866.service - OpenSSH per-connection server daemon (147.75.109.163:44866). Aug 13 08:41:28.590183 systemd-logind[1810]: Removed session 8. Aug 13 08:41:28.621672 sshd[2048]: Accepted publickey for core from 147.75.109.163 port 44866 ssh2: RSA SHA256:J9bO4QOv3eMMxMAPUK7J8OKu4RrTNshNa8HZDHJxfKY Aug 13 08:41:28.622860 sshd[2048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 08:41:28.626943 systemd-logind[1810]: New session 9 of user core. Aug 13 08:41:28.639499 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 08:41:28.705619 sudo[2051]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 08:41:28.705807 sudo[2051]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 08:41:28.715933 sudo[2051]: pam_unix(sudo:session): session closed for user root Aug 13 08:41:28.716950 sshd[2048]: pam_unix(sshd:session): session closed for user core Aug 13 08:41:28.729020 systemd[1]: sshd@6-147.75.71.95:22-147.75.109.163:44866.service: Deactivated successfully. Aug 13 08:41:28.729863 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 08:41:28.730731 systemd-logind[1810]: Session 9 logged out. Waiting for processes to exit. Aug 13 08:41:28.731640 systemd[1]: Started sshd@7-147.75.71.95:22-147.75.109.163:44880.service - OpenSSH per-connection server daemon (147.75.109.163:44880). Aug 13 08:41:28.732260 systemd-logind[1810]: Removed session 9. 
Aug 13 08:41:28.768344 sshd[2056]: Accepted publickey for core from 147.75.109.163 port 44880 ssh2: RSA SHA256:J9bO4QOv3eMMxMAPUK7J8OKu4RrTNshNa8HZDHJxfKY Aug 13 08:41:28.769596 sshd[2056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 08:41:28.774349 systemd-logind[1810]: New session 10 of user core. Aug 13 08:41:28.784539 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 08:41:28.842035 sudo[2061]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 08:41:28.842190 sudo[2061]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 08:41:28.844190 sudo[2061]: pam_unix(sudo:session): session closed for user root Aug 13 08:41:28.846894 sudo[2060]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 08:41:28.847049 sudo[2060]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 08:41:28.862517 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 08:41:28.863425 auditctl[2064]: No rules Aug 13 08:41:28.863639 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 08:41:28.863755 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 08:41:28.865278 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 08:41:28.881182 augenrules[2082]: No rules Aug 13 08:41:28.881579 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 08:41:28.882148 sudo[2060]: pam_unix(sudo:session): session closed for user root Aug 13 08:41:28.883035 sshd[2056]: pam_unix(sshd:session): session closed for user core Aug 13 08:41:28.884947 systemd[1]: sshd@7-147.75.71.95:22-147.75.109.163:44880.service: Deactivated successfully. Aug 13 08:41:28.885698 systemd[1]: session-10.scope: Deactivated successfully. 
Aug 13 08:41:28.886099 systemd-logind[1810]: Session 10 logged out. Waiting for processes to exit. Aug 13 08:41:28.887104 systemd[1]: Started sshd@8-147.75.71.95:22-147.75.109.163:44896.service - OpenSSH per-connection server daemon (147.75.109.163:44896). Aug 13 08:41:28.887669 systemd-logind[1810]: Removed session 10. Aug 13 08:41:28.921525 sshd[2090]: Accepted publickey for core from 147.75.109.163 port 44896 ssh2: RSA SHA256:J9bO4QOv3eMMxMAPUK7J8OKu4RrTNshNa8HZDHJxfKY Aug 13 08:41:28.922918 sshd[2090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 08:41:28.928298 systemd-logind[1810]: New session 11 of user core. Aug 13 08:41:28.936578 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 08:41:29.004140 sudo[2093]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 08:41:29.005022 sudo[2093]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 08:41:29.310563 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 08:41:29.310609 (dockerd)[2117]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 08:41:29.690276 dockerd[2117]: time="2025-08-13T08:41:29.690059659Z" level=info msg="Starting up" Aug 13 08:41:29.772549 dockerd[2117]: time="2025-08-13T08:41:29.772529881Z" level=info msg="Loading containers: start." Aug 13 08:41:29.849189 kernel: Initializing XFRM netlink socket Aug 13 08:41:29.932349 systemd-networkd[1619]: docker0: Link UP Aug 13 08:41:29.956220 dockerd[2117]: time="2025-08-13T08:41:29.956163193Z" level=info msg="Loading containers: done." 
Aug 13 08:41:29.965670 dockerd[2117]: time="2025-08-13T08:41:29.965650785Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 08:41:29.965734 dockerd[2117]: time="2025-08-13T08:41:29.965704266Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 08:41:29.965770 dockerd[2117]: time="2025-08-13T08:41:29.965761748Z" level=info msg="Daemon has completed initialization" Aug 13 08:41:29.966066 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2521245653-merged.mount: Deactivated successfully. Aug 13 08:41:29.979265 dockerd[2117]: time="2025-08-13T08:41:29.979197209Z" level=info msg="API listen on /run/docker.sock" Aug 13 08:41:29.979272 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 08:41:30.736636 containerd[1817]: time="2025-08-13T08:41:30.736552433Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 13 08:41:31.282582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2645807026.mount: Deactivated successfully. 
Aug 13 08:41:32.106587 containerd[1817]: time="2025-08-13T08:41:32.106561142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:32.106797 containerd[1817]: time="2025-08-13T08:41:32.106716723Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28799994" Aug 13 08:41:32.107210 containerd[1817]: time="2025-08-13T08:41:32.107178659Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:32.108726 containerd[1817]: time="2025-08-13T08:41:32.108713596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:32.109386 containerd[1817]: time="2025-08-13T08:41:32.109371260Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 1.37274376s" Aug 13 08:41:32.109427 containerd[1817]: time="2025-08-13T08:41:32.109388703Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\"" Aug 13 08:41:32.109775 containerd[1817]: time="2025-08-13T08:41:32.109761713Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 13 08:41:33.245415 containerd[1817]: time="2025-08-13T08:41:33.245389126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:33.245681 containerd[1817]: time="2025-08-13T08:41:33.245646659Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783636" Aug 13 08:41:33.245974 containerd[1817]: time="2025-08-13T08:41:33.245960106Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:33.247541 containerd[1817]: time="2025-08-13T08:41:33.247527968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:33.248644 containerd[1817]: time="2025-08-13T08:41:33.248628927Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 1.138847068s" Aug 13 08:41:33.248677 containerd[1817]: time="2025-08-13T08:41:33.248647139Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\"" Aug 13 08:41:33.248912 containerd[1817]: time="2025-08-13T08:41:33.248902239Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Aug 13 08:41:34.177234 containerd[1817]: time="2025-08-13T08:41:34.177208094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:34.177476 containerd[1817]: time="2025-08-13T08:41:34.177456256Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176921" Aug 13 08:41:34.177787 containerd[1817]: time="2025-08-13T08:41:34.177775827Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:34.179378 containerd[1817]: time="2025-08-13T08:41:34.179337349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:34.179992 containerd[1817]: time="2025-08-13T08:41:34.179951564Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 931.032837ms" Aug 13 08:41:34.179992 containerd[1817]: time="2025-08-13T08:41:34.179967828Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\"" Aug 13 08:41:34.180234 containerd[1817]: time="2025-08-13T08:41:34.180222479Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 13 08:41:34.999780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1844668857.mount: Deactivated successfully. 
Aug 13 08:41:35.194824 containerd[1817]: time="2025-08-13T08:41:35.194798645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:35.195037 containerd[1817]: time="2025-08-13T08:41:35.195024765Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380" Aug 13 08:41:35.195330 containerd[1817]: time="2025-08-13T08:41:35.195318447Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:35.196330 containerd[1817]: time="2025-08-13T08:41:35.196289785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:35.196802 containerd[1817]: time="2025-08-13T08:41:35.196755965Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 1.016514025s" Aug 13 08:41:35.196802 containerd[1817]: time="2025-08-13T08:41:35.196779023Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\"" Aug 13 08:41:35.197074 containerd[1817]: time="2025-08-13T08:41:35.197064355Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 08:41:35.735858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1202926954.mount: Deactivated successfully. 
Aug 13 08:41:36.250535 containerd[1817]: time="2025-08-13T08:41:36.250513231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:36.250737 containerd[1817]: time="2025-08-13T08:41:36.250713336Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 08:41:36.251190 containerd[1817]: time="2025-08-13T08:41:36.251177949Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:36.253437 containerd[1817]: time="2025-08-13T08:41:36.253423773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:36.254096 containerd[1817]: time="2025-08-13T08:41:36.254078976Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.056998112s" Aug 13 08:41:36.254125 containerd[1817]: time="2025-08-13T08:41:36.254099643Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 08:41:36.254664 containerd[1817]: time="2025-08-13T08:41:36.254648571Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 08:41:36.760107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount445752786.mount: Deactivated successfully. 
Aug 13 08:41:36.761317 containerd[1817]: time="2025-08-13T08:41:36.761299077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:36.761538 containerd[1817]: time="2025-08-13T08:41:36.761517408Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 08:41:36.761805 containerd[1817]: time="2025-08-13T08:41:36.761792592Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:36.762849 containerd[1817]: time="2025-08-13T08:41:36.762836493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:36.763364 containerd[1817]: time="2025-08-13T08:41:36.763351089Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 508.683105ms" Aug 13 08:41:36.763392 containerd[1817]: time="2025-08-13T08:41:36.763366080Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 08:41:36.763719 containerd[1817]: time="2025-08-13T08:41:36.763709555Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 08:41:37.203490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3462982644.mount: Deactivated successfully. Aug 13 08:41:38.139789 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Aug 13 08:41:38.148317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 08:41:38.377651 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 08:41:38.378226 containerd[1817]: time="2025-08-13T08:41:38.378202609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:38.378572 containerd[1817]: time="2025-08-13T08:41:38.378548388Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Aug 13 08:41:38.378951 containerd[1817]: time="2025-08-13T08:41:38.378936618Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:38.379886 (kubelet)[2477]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 08:41:38.381340 containerd[1817]: time="2025-08-13T08:41:38.381317216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:41:38.382246 containerd[1817]: time="2025-08-13T08:41:38.382225196Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.618497084s" Aug 13 08:41:38.382288 containerd[1817]: time="2025-08-13T08:41:38.382250975Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 08:41:38.414395 kubelet[2477]: E0813 
08:41:38.414281 2477 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 08:41:38.416388 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 08:41:38.416547 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 08:41:39.905960 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 08:41:39.923540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 08:41:39.940060 systemd[1]: Reloading requested from client PID 2547 ('systemctl') (unit session-11.scope)... Aug 13 08:41:39.940068 systemd[1]: Reloading... Aug 13 08:41:39.978239 zram_generator::config[2586]: No configuration found. Aug 13 08:41:40.043714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 08:41:40.103987 systemd[1]: Reloading finished in 163 ms. Aug 13 08:41:40.140036 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 08:41:40.140080 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 08:41:40.140237 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 08:41:40.141058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 08:41:40.368095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 08:41:40.370402 (kubelet)[2653]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 08:41:40.390551 kubelet[2653]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 08:41:40.390551 kubelet[2653]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 08:41:40.390551 kubelet[2653]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 08:41:40.390762 kubelet[2653]: I0813 08:41:40.390579 2653 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 08:41:40.579995 kubelet[2653]: I0813 08:41:40.579978 2653 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 08:41:40.579995 kubelet[2653]: I0813 08:41:40.579992 2653 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 08:41:40.580157 kubelet[2653]: I0813 08:41:40.580149 2653 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 08:41:40.599370 kubelet[2653]: E0813 08:41:40.599355 2653 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.75.71.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.75.71.95:6443: connect: connection refused" logger="UnhandledError" Aug 13 08:41:40.602796 kubelet[2653]: I0813 
08:41:40.602785 2653 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 08:41:40.608101 kubelet[2653]: E0813 08:41:40.608087 2653 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 08:41:40.608131 kubelet[2653]: I0813 08:41:40.608103 2653 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 08:41:40.616513 kubelet[2653]: I0813 08:41:40.616481 2653 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 08:41:40.618371 kubelet[2653]: I0813 08:41:40.618294 2653 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 08:41:40.618409 kubelet[2653]: I0813 08:41:40.618308 2653 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.5-a-711ae8cc9f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 08:41:40.618463 kubelet[2653]: I0813 08:41:40.618415 2653 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 08:41:40.618463 kubelet[2653]: I0813 08:41:40.618421 2653 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 08:41:40.618491 kubelet[2653]: I0813 08:41:40.618486 2653 state_mem.go:36] "Initialized new in-memory state store" Aug 13 08:41:40.622275 kubelet[2653]: I0813 08:41:40.622239 2653 
kubelet.go:446] "Attempting to sync node with API server" Aug 13 08:41:40.622275 kubelet[2653]: I0813 08:41:40.622265 2653 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 08:41:40.622275 kubelet[2653]: I0813 08:41:40.622274 2653 kubelet.go:352] "Adding apiserver pod source" Aug 13 08:41:40.622372 kubelet[2653]: I0813 08:41:40.622279 2653 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 08:41:40.624889 kubelet[2653]: I0813 08:41:40.624857 2653 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 08:41:40.625193 kubelet[2653]: W0813 08:41:40.625145 2653 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.71.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-711ae8cc9f&limit=500&resourceVersion=0": dial tcp 147.75.71.95:6443: connect: connection refused Aug 13 08:41:40.625228 kubelet[2653]: E0813 08:41:40.625197 2653 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.75.71.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-711ae8cc9f&limit=500&resourceVersion=0\": dial tcp 147.75.71.95:6443: connect: connection refused" logger="UnhandledError" Aug 13 08:41:40.625842 kubelet[2653]: I0813 08:41:40.625826 2653 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 08:41:40.625929 kubelet[2653]: W0813 08:41:40.625909 2653 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.71.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.71.95:6443: connect: connection refused Aug 13 08:41:40.625964 kubelet[2653]: E0813 08:41:40.625935 2653 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.71.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.71.95:6443: connect: connection refused" logger="UnhandledError" Aug 13 08:41:40.626813 kubelet[2653]: W0813 08:41:40.626803 2653 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 08:41:40.628523 kubelet[2653]: I0813 08:41:40.628514 2653 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 08:41:40.628599 kubelet[2653]: I0813 08:41:40.628550 2653 server.go:1287] "Started kubelet" Aug 13 08:41:40.628630 kubelet[2653]: I0813 08:41:40.628607 2653 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 08:41:40.629141 kubelet[2653]: I0813 08:41:40.629112 2653 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 08:41:40.629307 kubelet[2653]: I0813 08:41:40.629299 2653 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 08:41:40.629336 kubelet[2653]: I0813 08:41:40.629303 2653 server.go:479] "Adding debug handlers to kubelet server" Aug 13 08:41:40.629430 kubelet[2653]: I0813 08:41:40.629420 2653 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 08:41:40.629430 kubelet[2653]: I0813 08:41:40.629424 2653 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 08:41:40.629496 kubelet[2653]: I0813 08:41:40.629450 2653 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 08:41:40.629496 kubelet[2653]: I0813 08:41:40.629470 2653 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 08:41:40.629496 kubelet[2653]: E0813 08:41:40.629467 2653 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-711ae8cc9f\" not found" Aug 13 08:41:40.629570 kubelet[2653]: I0813 08:41:40.629508 2653 reconciler.go:26] "Reconciler: start to sync state" Aug 13 08:41:40.629649 kubelet[2653]: E0813 08:41:40.629623 2653 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.71.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-711ae8cc9f?timeout=10s\": dial tcp 147.75.71.95:6443: connect: connection refused" interval="200ms" Aug 13 08:41:40.629699 kubelet[2653]: W0813 08:41:40.629674 2653 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.71.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.71.95:6443: connect: connection refused Aug 13 08:41:40.629732 kubelet[2653]: E0813 08:41:40.629709 2653 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.71.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.71.95:6443: connect: connection refused" logger="UnhandledError" Aug 13 08:41:40.629763 kubelet[2653]: I0813 08:41:40.629737 2653 factory.go:221] Registration of the systemd container factory successfully Aug 13 08:41:40.629799 kubelet[2653]: I0813 08:41:40.629787 2653 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 08:41:40.630210 kubelet[2653]: I0813 08:41:40.630201 2653 factory.go:221] Registration of the containerd container factory successfully Aug 13 08:41:40.630249 kubelet[2653]: E0813 08:41:40.630225 2653 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 08:41:40.634668 kubelet[2653]: E0813 08:41:40.632689 2653 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.71.95:6443/api/v1/namespaces/default/events\": dial tcp 147.75.71.95:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-a-711ae8cc9f.185b46f5a44f25be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-a-711ae8cc9f,UID:ci-4081.3.5-a-711ae8cc9f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-a-711ae8cc9f,},FirstTimestamp:2025-08-13 08:41:40.62852243 +0000 UTC m=+0.256221474,LastTimestamp:2025-08-13 08:41:40.62852243 +0000 UTC m=+0.256221474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-a-711ae8cc9f,}" Aug 13 08:41:40.639138 kubelet[2653]: I0813 08:41:40.639117 2653 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 08:41:40.639651 kubelet[2653]: I0813 08:41:40.639642 2653 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 08:41:40.639674 kubelet[2653]: I0813 08:41:40.639654 2653 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 08:41:40.639674 kubelet[2653]: I0813 08:41:40.639665 2653 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 08:41:40.639674 kubelet[2653]: I0813 08:41:40.639670 2653 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 08:41:40.639740 kubelet[2653]: E0813 08:41:40.639693 2653 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 08:41:40.639916 kubelet[2653]: W0813 08:41:40.639901 2653 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.71.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.71.95:6443: connect: connection refused Aug 13 08:41:40.639952 kubelet[2653]: E0813 08:41:40.639926 2653 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.71.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.71.95:6443: connect: connection refused" logger="UnhandledError" Aug 13 08:41:40.730620 kubelet[2653]: E0813 08:41:40.730559 2653 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-711ae8cc9f\" not found" Aug 13 08:41:40.740213 kubelet[2653]: E0813 08:41:40.740064 2653 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 08:41:40.806132 kubelet[2653]: I0813 08:41:40.806070 2653 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 08:41:40.806132 kubelet[2653]: I0813 08:41:40.806118 2653 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 08:41:40.806479 kubelet[2653]: I0813 08:41:40.806161 2653 state_mem.go:36] "Initialized new in-memory state store" Aug 13 08:41:40.808183 kubelet[2653]: I0813 08:41:40.808166 2653 policy_none.go:49] "None policy: Start" Aug 13 08:41:40.808223 kubelet[2653]: I0813 08:41:40.808197 2653 memory_manager.go:186] "Starting 
memorymanager" policy="None" Aug 13 08:41:40.808223 kubelet[2653]: I0813 08:41:40.808212 2653 state_mem.go:35] "Initializing new in-memory state store" Aug 13 08:41:40.811576 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 08:41:40.822922 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 08:41:40.824879 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 08:41:40.830666 kubelet[2653]: E0813 08:41:40.830648 2653 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.71.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-711ae8cc9f?timeout=10s\": dial tcp 147.75.71.95:6443: connect: connection refused" interval="400ms" Aug 13 08:41:40.830666 kubelet[2653]: E0813 08:41:40.830654 2653 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-711ae8cc9f\" not found" Aug 13 08:41:40.833819 kubelet[2653]: I0813 08:41:40.833805 2653 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 08:41:40.833926 kubelet[2653]: I0813 08:41:40.833915 2653 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 08:41:40.833963 kubelet[2653]: I0813 08:41:40.833924 2653 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 08:41:40.834018 kubelet[2653]: I0813 08:41:40.834008 2653 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 08:41:40.834406 kubelet[2653]: E0813 08:41:40.834395 2653 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 08:41:40.834459 kubelet[2653]: E0813 08:41:40.834418 2653 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-a-711ae8cc9f\" not found" Aug 13 08:41:40.938903 kubelet[2653]: I0813 08:41:40.938705 2653 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:41:40.939571 kubelet[2653]: E0813 08:41:40.939513 2653 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.75.71.95:6443/api/v1/nodes\": dial tcp 147.75.71.95:6443: connect: connection refused" node="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:41:40.948164 systemd[1]: Created slice kubepods-burstable-pod34768ca8fe62fb40f5cc1efcb1fb8278.slice - libcontainer container kubepods-burstable-pod34768ca8fe62fb40f5cc1efcb1fb8278.slice. Aug 13 08:41:40.975523 kubelet[2653]: E0813 08:41:40.975430 2653 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-711ae8cc9f\" not found" node="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:41:40.984168 systemd[1]: Created slice kubepods-burstable-pode9c3286da970c801b86433b280118f6e.slice - libcontainer container kubepods-burstable-pode9c3286da970c801b86433b280118f6e.slice. Aug 13 08:41:40.988726 kubelet[2653]: E0813 08:41:40.988636 2653 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-711ae8cc9f\" not found" node="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:41:41.007082 systemd[1]: Created slice kubepods-burstable-pod887353a58934c5b2cdd931b184c9518b.slice - libcontainer container kubepods-burstable-pod887353a58934c5b2cdd931b184c9518b.slice. 
Aug 13 08:41:41.011936 kubelet[2653]: E0813 08:41:41.011847 2653 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-711ae8cc9f\" not found" node="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:41:41.032423 kubelet[2653]: I0813 08:41:41.032334 2653 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/887353a58934c5b2cdd931b184c9518b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-a-711ae8cc9f\" (UID: \"887353a58934c5b2cdd931b184c9518b\") " pod="kube-system/kube-scheduler-ci-4081.3.5-a-711ae8cc9f" Aug 13 08:41:41.032607 kubelet[2653]: I0813 08:41:41.032438 2653 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34768ca8fe62fb40f5cc1efcb1fb8278-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-a-711ae8cc9f\" (UID: \"34768ca8fe62fb40f5cc1efcb1fb8278\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-711ae8cc9f" Aug 13 08:41:41.032607 kubelet[2653]: I0813 08:41:41.032499 2653 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34768ca8fe62fb40f5cc1efcb1fb8278-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-a-711ae8cc9f\" (UID: \"34768ca8fe62fb40f5cc1efcb1fb8278\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-711ae8cc9f" Aug 13 08:41:41.032607 kubelet[2653]: I0813 08:41:41.032554 2653 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34768ca8fe62fb40f5cc1efcb1fb8278-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-a-711ae8cc9f\" (UID: \"34768ca8fe62fb40f5cc1efcb1fb8278\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-711ae8cc9f" Aug 13 08:41:41.032858 kubelet[2653]: I0813 08:41:41.032607 2653 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9c3286da970c801b86433b280118f6e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-a-711ae8cc9f\" (UID: \"e9c3286da970c801b86433b280118f6e\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-711ae8cc9f" Aug 13 08:41:41.032858 kubelet[2653]: I0813 08:41:41.032652 2653 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9c3286da970c801b86433b280118f6e-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-711ae8cc9f\" (UID: \"e9c3286da970c801b86433b280118f6e\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-711ae8cc9f" Aug 13 08:41:41.032858 kubelet[2653]: I0813 08:41:41.032694 2653 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9c3286da970c801b86433b280118f6e-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-711ae8cc9f\" (UID: \"e9c3286da970c801b86433b280118f6e\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-711ae8cc9f" Aug 13 08:41:41.032858 kubelet[2653]: I0813 08:41:41.032738 2653 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9c3286da970c801b86433b280118f6e-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-a-711ae8cc9f\" (UID: \"e9c3286da970c801b86433b280118f6e\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-711ae8cc9f" Aug 13 08:41:41.032858 kubelet[2653]: I0813 08:41:41.032785 2653 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9c3286da970c801b86433b280118f6e-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4081.3.5-a-711ae8cc9f\" (UID: \"e9c3286da970c801b86433b280118f6e\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-711ae8cc9f" Aug 13 08:41:41.144480 kubelet[2653]: I0813 08:41:41.144423 2653 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:41:41.145284 kubelet[2653]: E0813 08:41:41.145149 2653 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.75.71.95:6443/api/v1/nodes\": dial tcp 147.75.71.95:6443: connect: connection refused" node="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:41:41.231758 kubelet[2653]: E0813 08:41:41.231512 2653 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.71.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-711ae8cc9f?timeout=10s\": dial tcp 147.75.71.95:6443: connect: connection refused" interval="800ms" Aug 13 08:41:41.277720 containerd[1817]: time="2025-08-13T08:41:41.277632362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-a-711ae8cc9f,Uid:34768ca8fe62fb40f5cc1efcb1fb8278,Namespace:kube-system,Attempt:0,}" Aug 13 08:41:41.290977 containerd[1817]: time="2025-08-13T08:41:41.290870277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-a-711ae8cc9f,Uid:e9c3286da970c801b86433b280118f6e,Namespace:kube-system,Attempt:0,}" Aug 13 08:41:41.313727 containerd[1817]: time="2025-08-13T08:41:41.313612854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-a-711ae8cc9f,Uid:887353a58934c5b2cdd931b184c9518b,Namespace:kube-system,Attempt:0,}" Aug 13 08:41:41.516771 kubelet[2653]: W0813 08:41:41.516676 2653 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.71.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.71.95:6443: 
connect: connection refused Aug 13 08:41:41.516771 kubelet[2653]: E0813 08:41:41.516715 2653 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.71.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.71.95:6443: connect: connection refused" logger="UnhandledError" Aug 13 08:41:41.547852 kubelet[2653]: I0813 08:41:41.547808 2653 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:41:41.548153 kubelet[2653]: E0813 08:41:41.548105 2653 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.75.71.95:6443/api/v1/nodes\": dial tcp 147.75.71.95:6443: connect: connection refused" node="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:41:41.825717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount987383418.mount: Deactivated successfully. Aug 13 08:41:41.826887 containerd[1817]: time="2025-08-13T08:41:41.826842321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 08:41:41.827107 containerd[1817]: time="2025-08-13T08:41:41.827063933Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 08:41:41.827701 containerd[1817]: time="2025-08-13T08:41:41.827660604Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 08:41:41.827947 containerd[1817]: time="2025-08-13T08:41:41.827905112Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 08:41:41.828512 containerd[1817]: time="2025-08-13T08:41:41.828474213Z" level=info msg="ImageCreate event 
name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 08:41:41.828993 containerd[1817]: time="2025-08-13T08:41:41.828955788Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 08:41:41.829955 containerd[1817]: time="2025-08-13T08:41:41.829916787Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 08:41:41.830903 containerd[1817]: time="2025-08-13T08:41:41.830863193Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 552.988414ms" Aug 13 08:41:41.831620 containerd[1817]: time="2025-08-13T08:41:41.831567292Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 540.564311ms" Aug 13 08:41:41.831904 containerd[1817]: time="2025-08-13T08:41:41.831862271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 08:41:41.833152 containerd[1817]: time="2025-08-13T08:41:41.833115276Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 519.401181ms" Aug 13 08:41:41.953013 containerd[1817]: time="2025-08-13T08:41:41.952962739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 08:41:41.953013 containerd[1817]: time="2025-08-13T08:41:41.952999155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 08:41:41.953140 containerd[1817]: time="2025-08-13T08:41:41.953016177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:41:41.953217 containerd[1817]: time="2025-08-13T08:41:41.952957335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 08:41:41.953217 containerd[1817]: time="2025-08-13T08:41:41.953211763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 08:41:41.953274 containerd[1817]: time="2025-08-13T08:41:41.953220070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:41:41.953303 containerd[1817]: time="2025-08-13T08:41:41.953274094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:41:41.953323 containerd[1817]: time="2025-08-13T08:41:41.953288230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:41:41.953323 containerd[1817]: time="2025-08-13T08:41:41.953274777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 08:41:41.953323 containerd[1817]: time="2025-08-13T08:41:41.953309185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 08:41:41.953323 containerd[1817]: time="2025-08-13T08:41:41.953318984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:41:41.953419 containerd[1817]: time="2025-08-13T08:41:41.953399523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:41:41.974484 systemd[1]: Started cri-containerd-502c3537774cd7e9467a620bf76fe2f991a326a05d4e02a035bf7bc0379dea80.scope - libcontainer container 502c3537774cd7e9467a620bf76fe2f991a326a05d4e02a035bf7bc0379dea80. Aug 13 08:41:41.975248 systemd[1]: Started cri-containerd-88ff1df82b564b7aeb50e12adb7246060baba6ed4f919de4dfc20ca86aa56b9a.scope - libcontainer container 88ff1df82b564b7aeb50e12adb7246060baba6ed4f919de4dfc20ca86aa56b9a. Aug 13 08:41:41.976135 systemd[1]: Started cri-containerd-a7bb542ef9ffc962341d4fe58c164f19786050c4012e91e86cb937e24eeb2655.scope - libcontainer container a7bb542ef9ffc962341d4fe58c164f19786050c4012e91e86cb937e24eeb2655. 
Aug 13 08:41:41.979888 kubelet[2653]: W0813 08:41:41.979856 2653 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.71.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.71.95:6443: connect: connection refused Aug 13 08:41:41.979957 kubelet[2653]: E0813 08:41:41.979896 2653 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.71.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.71.95:6443: connect: connection refused" logger="UnhandledError" Aug 13 08:41:41.998086 containerd[1817]: time="2025-08-13T08:41:41.998062221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-a-711ae8cc9f,Uid:e9c3286da970c801b86433b280118f6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"502c3537774cd7e9467a620bf76fe2f991a326a05d4e02a035bf7bc0379dea80\"" Aug 13 08:41:41.998270 containerd[1817]: time="2025-08-13T08:41:41.998252023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-a-711ae8cc9f,Uid:887353a58934c5b2cdd931b184c9518b,Namespace:kube-system,Attempt:0,} returns sandbox id \"88ff1df82b564b7aeb50e12adb7246060baba6ed4f919de4dfc20ca86aa56b9a\"" Aug 13 08:41:41.998645 kubelet[2653]: W0813 08:41:41.998629 2653 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.71.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.71.95:6443: connect: connection refused Aug 13 08:41:41.998691 kubelet[2653]: E0813 08:41:41.998654 2653 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://147.75.71.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.71.95:6443: connect: connection refused" logger="UnhandledError" Aug 13 08:41:41.998821 containerd[1817]: time="2025-08-13T08:41:41.998804941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-a-711ae8cc9f,Uid:34768ca8fe62fb40f5cc1efcb1fb8278,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7bb542ef9ffc962341d4fe58c164f19786050c4012e91e86cb937e24eeb2655\"" Aug 13 08:41:42.000256 containerd[1817]: time="2025-08-13T08:41:42.000232497Z" level=info msg="CreateContainer within sandbox \"502c3537774cd7e9467a620bf76fe2f991a326a05d4e02a035bf7bc0379dea80\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 08:41:42.000327 containerd[1817]: time="2025-08-13T08:41:42.000232496Z" level=info msg="CreateContainer within sandbox \"88ff1df82b564b7aeb50e12adb7246060baba6ed4f919de4dfc20ca86aa56b9a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 08:41:42.000491 containerd[1817]: time="2025-08-13T08:41:42.000479787Z" level=info msg="CreateContainer within sandbox \"a7bb542ef9ffc962341d4fe58c164f19786050c4012e91e86cb937e24eeb2655\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 08:41:42.007426 containerd[1817]: time="2025-08-13T08:41:42.007409377Z" level=info msg="CreateContainer within sandbox \"88ff1df82b564b7aeb50e12adb7246060baba6ed4f919de4dfc20ca86aa56b9a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"11ce20f4cfa1cbaf78487b95f0bcf813687b6e20df9910530da45358f0258c0a\"" Aug 13 08:41:42.007693 containerd[1817]: time="2025-08-13T08:41:42.007651527Z" level=info msg="StartContainer for \"11ce20f4cfa1cbaf78487b95f0bcf813687b6e20df9910530da45358f0258c0a\"" Aug 13 08:41:42.008307 containerd[1817]: time="2025-08-13T08:41:42.008295793Z" level=info msg="CreateContainer within sandbox 
\"a7bb542ef9ffc962341d4fe58c164f19786050c4012e91e86cb937e24eeb2655\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2ea627cd241bf13874f1d608a3e13f726aa98093c8335fbab2215d378b619652\"" Aug 13 08:41:42.008458 containerd[1817]: time="2025-08-13T08:41:42.008448054Z" level=info msg="StartContainer for \"2ea627cd241bf13874f1d608a3e13f726aa98093c8335fbab2215d378b619652\"" Aug 13 08:41:42.009463 containerd[1817]: time="2025-08-13T08:41:42.009449584Z" level=info msg="CreateContainer within sandbox \"502c3537774cd7e9467a620bf76fe2f991a326a05d4e02a035bf7bc0379dea80\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6f50712538db6612115a966e47e3ed5b59355696f8b0be13bc12e32be757211a\"" Aug 13 08:41:42.009642 containerd[1817]: time="2025-08-13T08:41:42.009631831Z" level=info msg="StartContainer for \"6f50712538db6612115a966e47e3ed5b59355696f8b0be13bc12e32be757211a\"" Aug 13 08:41:42.032754 kubelet[2653]: E0813 08:41:42.032734 2653 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.71.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-711ae8cc9f?timeout=10s\": dial tcp 147.75.71.95:6443: connect: connection refused" interval="1.6s" Aug 13 08:41:42.033430 systemd[1]: Started cri-containerd-11ce20f4cfa1cbaf78487b95f0bcf813687b6e20df9910530da45358f0258c0a.scope - libcontainer container 11ce20f4cfa1cbaf78487b95f0bcf813687b6e20df9910530da45358f0258c0a. Aug 13 08:41:42.034019 systemd[1]: Started cri-containerd-2ea627cd241bf13874f1d608a3e13f726aa98093c8335fbab2215d378b619652.scope - libcontainer container 2ea627cd241bf13874f1d608a3e13f726aa98093c8335fbab2215d378b619652. Aug 13 08:41:42.035585 systemd[1]: Started cri-containerd-6f50712538db6612115a966e47e3ed5b59355696f8b0be13bc12e32be757211a.scope - libcontainer container 6f50712538db6612115a966e47e3ed5b59355696f8b0be13bc12e32be757211a. 
Aug 13 08:41:42.057295 containerd[1817]: time="2025-08-13T08:41:42.057269917Z" level=info msg="StartContainer for \"11ce20f4cfa1cbaf78487b95f0bcf813687b6e20df9910530da45358f0258c0a\" returns successfully"
Aug 13 08:41:42.058010 containerd[1817]: time="2025-08-13T08:41:42.057994352Z" level=info msg="StartContainer for \"2ea627cd241bf13874f1d608a3e13f726aa98093c8335fbab2215d378b619652\" returns successfully"
Aug 13 08:41:42.059058 containerd[1817]: time="2025-08-13T08:41:42.059041818Z" level=info msg="StartContainer for \"6f50712538db6612115a966e47e3ed5b59355696f8b0be13bc12e32be757211a\" returns successfully"
Aug 13 08:41:42.350003 kubelet[2653]: I0813 08:41:42.349953 2653 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:42.644145 kubelet[2653]: E0813 08:41:42.644098 2653 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-711ae8cc9f\" not found" node="ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:42.644332 kubelet[2653]: E0813 08:41:42.644208 2653 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-711ae8cc9f\" not found" node="ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:42.644990 kubelet[2653]: E0813 08:41:42.644982 2653 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-711ae8cc9f\" not found" node="ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:42.852727 kubelet[2653]: I0813 08:41:42.852673 2653 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:42.930224 kubelet[2653]: I0813 08:41:42.930001 2653 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:42.940893 kubelet[2653]: E0813 08:41:42.940829 2653 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-a-711ae8cc9f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:42.940893 kubelet[2653]: I0813 08:41:42.940885 2653 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:42.944791 kubelet[2653]: E0813 08:41:42.944733 2653 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.5-a-711ae8cc9f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:42.944791 kubelet[2653]: I0813 08:41:42.944787 2653 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:42.948474 kubelet[2653]: E0813 08:41:42.948420 2653 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-a-711ae8cc9f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:43.624591 kubelet[2653]: I0813 08:41:43.624504 2653 apiserver.go:52] "Watching apiserver"
Aug 13 08:41:43.630277 kubelet[2653]: I0813 08:41:43.630222 2653 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 13 08:41:43.648494 kubelet[2653]: I0813 08:41:43.648447 2653 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:43.648835 kubelet[2653]: I0813 08:41:43.648611 2653 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:43.649651 kubelet[2653]: E0813 08:41:43.649642 2653 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-a-711ae8cc9f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:43.649730 kubelet[2653]: E0813 08:41:43.649723 2653 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-a-711ae8cc9f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:45.317695 systemd[1]: Reloading requested from client PID 2972 ('systemctl') (unit session-11.scope)...
Aug 13 08:41:45.317703 systemd[1]: Reloading...
Aug 13 08:41:45.354267 zram_generator::config[3011]: No configuration found.
Aug 13 08:41:45.420140 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 08:41:45.488569 systemd[1]: Reloading finished in 170 ms.
Aug 13 08:41:45.526493 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 08:41:45.536613 systemd[1]: kubelet.service: Deactivated successfully.
Aug 13 08:41:45.536718 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 08:41:45.547668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 08:41:45.800524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 08:41:45.802951 (kubelet)[3075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 08:41:45.823508 kubelet[3075]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 08:41:45.823508 kubelet[3075]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 13 08:41:45.823508 kubelet[3075]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 08:41:45.823731 kubelet[3075]: I0813 08:41:45.823530 3075 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 08:41:45.826951 kubelet[3075]: I0813 08:41:45.826910 3075 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 13 08:41:45.826951 kubelet[3075]: I0813 08:41:45.826921 3075 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 08:41:45.827055 kubelet[3075]: I0813 08:41:45.827050 3075 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 13 08:41:45.827728 kubelet[3075]: I0813 08:41:45.827690 3075 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 13 08:41:45.828902 kubelet[3075]: I0813 08:41:45.828865 3075 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 08:41:45.830609 kubelet[3075]: E0813 08:41:45.830596 3075 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 08:41:45.830644 kubelet[3075]: I0813 08:41:45.830611 3075 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 08:41:45.837441 kubelet[3075]: I0813 08:41:45.837432 3075 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 08:41:45.837583 kubelet[3075]: I0813 08:41:45.837533 3075 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 08:41:45.837666 kubelet[3075]: I0813 08:41:45.837547 3075 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-a-711ae8cc9f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 08:41:45.837666 kubelet[3075]: I0813 08:41:45.837643 3075 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 08:41:45.837666 kubelet[3075]: I0813 08:41:45.837657 3075 container_manager_linux.go:304] "Creating device plugin manager"
Aug 13 08:41:45.837762 kubelet[3075]: I0813 08:41:45.837688 3075 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 08:41:45.837820 kubelet[3075]: I0813 08:41:45.837788 3075 kubelet.go:446] "Attempting to sync node with API server"
Aug 13 08:41:45.837820 kubelet[3075]: I0813 08:41:45.837799 3075 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 08:41:45.837820 kubelet[3075]: I0813 08:41:45.837809 3075 kubelet.go:352] "Adding apiserver pod source"
Aug 13 08:41:45.837820 kubelet[3075]: I0813 08:41:45.837814 3075 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 08:41:45.838114 kubelet[3075]: I0813 08:41:45.838101 3075 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Aug 13 08:41:45.838395 kubelet[3075]: I0813 08:41:45.838362 3075 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 08:41:45.838682 kubelet[3075]: I0813 08:41:45.838661 3075 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 08:41:45.838682 kubelet[3075]: I0813 08:41:45.838683 3075 server.go:1287] "Started kubelet"
Aug 13 08:41:45.838788 kubelet[3075]: I0813 08:41:45.838749 3075 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 08:41:45.838822 kubelet[3075]: I0813 08:41:45.838782 3075 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 08:41:45.839255 kubelet[3075]: I0813 08:41:45.839144 3075 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 08:41:45.840462 kubelet[3075]: E0813 08:41:45.840448 3075 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 08:41:45.840660 kubelet[3075]: I0813 08:41:45.840652 3075 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 08:41:45.840691 kubelet[3075]: I0813 08:41:45.840669 3075 server.go:479] "Adding debug handlers to kubelet server"
Aug 13 08:41:45.840728 kubelet[3075]: I0813 08:41:45.840707 3075 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 08:41:45.840760 kubelet[3075]: E0813 08:41:45.840736 3075 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-711ae8cc9f\" not found"
Aug 13 08:41:45.840760 kubelet[3075]: I0813 08:41:45.840742 3075 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 08:41:45.840811 kubelet[3075]: I0813 08:41:45.840771 3075 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 08:41:45.840874 kubelet[3075]: I0813 08:41:45.840866 3075 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 08:41:45.841745 kubelet[3075]: I0813 08:41:45.841734 3075 factory.go:221] Registration of the containerd container factory successfully
Aug 13 08:41:45.841745 kubelet[3075]: I0813 08:41:45.841744 3075 factory.go:221] Registration of the systemd container factory successfully
Aug 13 08:41:45.841823 kubelet[3075]: I0813 08:41:45.841812 3075 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 08:41:45.846255 kubelet[3075]: I0813 08:41:45.846232 3075 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 08:41:45.846783 kubelet[3075]: I0813 08:41:45.846773 3075 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 08:41:45.846830 kubelet[3075]: I0813 08:41:45.846788 3075 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 13 08:41:45.846830 kubelet[3075]: I0813 08:41:45.846801 3075 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 08:41:45.846830 kubelet[3075]: I0813 08:41:45.846805 3075 kubelet.go:2382] "Starting kubelet main sync loop"
Aug 13 08:41:45.846879 kubelet[3075]: E0813 08:41:45.846830 3075 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 08:41:45.857085 kubelet[3075]: I0813 08:41:45.857072 3075 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 13 08:41:45.857085 kubelet[3075]: I0813 08:41:45.857082 3075 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 13 08:41:45.857085 kubelet[3075]: I0813 08:41:45.857093 3075 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 08:41:45.857202 kubelet[3075]: I0813 08:41:45.857196 3075 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 13 08:41:45.857223 kubelet[3075]: I0813 08:41:45.857203 3075 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 13 08:41:45.857223 kubelet[3075]: I0813 08:41:45.857215 3075 policy_none.go:49] "None policy: Start"
Aug 13 08:41:45.857223 kubelet[3075]: I0813 08:41:45.857220 3075 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 13 08:41:45.857267 kubelet[3075]: I0813 08:41:45.857226 3075 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 08:41:45.857288 kubelet[3075]: I0813 08:41:45.857283 3075 state_mem.go:75] "Updated machine memory state"
Aug 13 08:41:45.859033 kubelet[3075]: I0813 08:41:45.859026 3075 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 08:41:45.859115 kubelet[3075]: I0813 08:41:45.859109 3075 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 08:41:45.859144 kubelet[3075]: I0813 08:41:45.859116 3075 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 08:41:45.859228 kubelet[3075]: I0813 08:41:45.859216 3075 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 08:41:45.859516 kubelet[3075]: E0813 08:41:45.859480 3075 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 13 08:41:45.948534 kubelet[3075]: I0813 08:41:45.948432 3075 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:45.948534 kubelet[3075]: I0813 08:41:45.948457 3075 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:45.948873 kubelet[3075]: I0813 08:41:45.948574 3075 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:45.956147 kubelet[3075]: W0813 08:41:45.956094 3075 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 08:41:45.956415 kubelet[3075]: W0813 08:41:45.956229 3075 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 08:41:45.956415 kubelet[3075]: W0813 08:41:45.956235 3075 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 08:41:45.966035 kubelet[3075]: I0813 08:41:45.965943 3075 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:45.975485 kubelet[3075]: I0813 08:41:45.975426 3075 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:45.975693 kubelet[3075]: I0813 08:41:45.975575 3075 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:46.042447 kubelet[3075]: I0813 08:41:46.042322 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/887353a58934c5b2cdd931b184c9518b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-a-711ae8cc9f\" (UID: \"887353a58934c5b2cdd931b184c9518b\") " pod="kube-system/kube-scheduler-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:46.042447 kubelet[3075]: I0813 08:41:46.042432 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9c3286da970c801b86433b280118f6e-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-711ae8cc9f\" (UID: \"e9c3286da970c801b86433b280118f6e\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:46.042773 kubelet[3075]: I0813 08:41:46.042495 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9c3286da970c801b86433b280118f6e-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-a-711ae8cc9f\" (UID: \"e9c3286da970c801b86433b280118f6e\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:46.042773 kubelet[3075]: I0813 08:41:46.042585 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34768ca8fe62fb40f5cc1efcb1fb8278-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-a-711ae8cc9f\" (UID: \"34768ca8fe62fb40f5cc1efcb1fb8278\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:46.042773 kubelet[3075]: I0813 08:41:46.042643 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9c3286da970c801b86433b280118f6e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-a-711ae8cc9f\" (UID: \"e9c3286da970c801b86433b280118f6e\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:46.042773 kubelet[3075]: I0813 08:41:46.042713 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9c3286da970c801b86433b280118f6e-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-711ae8cc9f\" (UID: \"e9c3286da970c801b86433b280118f6e\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:46.042773 kubelet[3075]: I0813 08:41:46.042764 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9c3286da970c801b86433b280118f6e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-a-711ae8cc9f\" (UID: \"e9c3286da970c801b86433b280118f6e\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:46.043356 kubelet[3075]: I0813 08:41:46.042815 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34768ca8fe62fb40f5cc1efcb1fb8278-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-a-711ae8cc9f\" (UID: \"34768ca8fe62fb40f5cc1efcb1fb8278\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:46.043356 kubelet[3075]: I0813 08:41:46.042865 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34768ca8fe62fb40f5cc1efcb1fb8278-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-a-711ae8cc9f\" (UID: \"34768ca8fe62fb40f5cc1efcb1fb8278\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:46.838880 kubelet[3075]: I0813 08:41:46.838813 3075 apiserver.go:52] "Watching apiserver"
Aug 13 08:41:46.852113 kubelet[3075]: I0813 08:41:46.852045 3075 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:46.852320 kubelet[3075]: I0813 08:41:46.852134 3075 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:46.852427 kubelet[3075]: I0813 08:41:46.852381 3075 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:46.859791 kubelet[3075]: W0813 08:41:46.859734 3075 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 08:41:46.859985 kubelet[3075]: E0813 08:41:46.859877 3075 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.5-a-711ae8cc9f\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:46.860173 kubelet[3075]: W0813 08:41:46.860072 3075 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 08:41:46.860430 kubelet[3075]: E0813 08:41:46.860193 3075 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-a-711ae8cc9f\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:46.860430 kubelet[3075]: W0813 08:41:46.860267 3075 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 08:41:46.860430 kubelet[3075]: E0813 08:41:46.860387 3075 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-a-711ae8cc9f\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-a-711ae8cc9f"
Aug 13 08:41:46.873374 kubelet[3075]: I0813 08:41:46.873343 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-a-711ae8cc9f" podStartSLOduration=1.873332951 podStartE2EDuration="1.873332951s" podCreationTimestamp="2025-08-13 08:41:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 08:41:46.873295693 +0000 UTC m=+1.068536679" watchObservedRunningTime="2025-08-13 08:41:46.873332951 +0000 UTC m=+1.068573933"
Aug 13 08:41:46.877172 kubelet[3075]: I0813 08:41:46.877148 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-a-711ae8cc9f" podStartSLOduration=1.877137389 podStartE2EDuration="1.877137389s" podCreationTimestamp="2025-08-13 08:41:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 08:41:46.877129077 +0000 UTC m=+1.072370065" watchObservedRunningTime="2025-08-13 08:41:46.877137389 +0000 UTC m=+1.072378375"
Aug 13 08:41:46.881458 kubelet[3075]: I0813 08:41:46.881420 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-a-711ae8cc9f" podStartSLOduration=1.881407461 podStartE2EDuration="1.881407461s" podCreationTimestamp="2025-08-13 08:41:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 08:41:46.881347055 +0000 UTC m=+1.076588040" watchObservedRunningTime="2025-08-13 08:41:46.881407461 +0000 UTC m=+1.076648443"
Aug 13 08:41:46.941365 kubelet[3075]: I0813 08:41:46.941348 3075 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 13 08:41:50.948949 kubelet[3075]: I0813 08:41:50.948873 3075 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 13 08:41:50.950142 kubelet[3075]: I0813 08:41:50.950052 3075 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 13 08:41:50.950332 containerd[1817]: time="2025-08-13T08:41:50.949615754Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 13 08:41:51.633499 systemd[1]: Created slice kubepods-besteffort-podd298e350_8c4d_4e3f_b9b2_fe5bbe08aa0c.slice - libcontainer container kubepods-besteffort-podd298e350_8c4d_4e3f_b9b2_fe5bbe08aa0c.slice.
Aug 13 08:41:51.682377 kubelet[3075]: I0813 08:41:51.682260 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d298e350-8c4d-4e3f-b9b2-fe5bbe08aa0c-kube-proxy\") pod \"kube-proxy-sqkbp\" (UID: \"d298e350-8c4d-4e3f-b9b2-fe5bbe08aa0c\") " pod="kube-system/kube-proxy-sqkbp"
Aug 13 08:41:51.682377 kubelet[3075]: I0813 08:41:51.682364 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d298e350-8c4d-4e3f-b9b2-fe5bbe08aa0c-lib-modules\") pod \"kube-proxy-sqkbp\" (UID: \"d298e350-8c4d-4e3f-b9b2-fe5bbe08aa0c\") " pod="kube-system/kube-proxy-sqkbp"
Aug 13 08:41:51.682732 kubelet[3075]: I0813 08:41:51.682428 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b584r\" (UniqueName: \"kubernetes.io/projected/d298e350-8c4d-4e3f-b9b2-fe5bbe08aa0c-kube-api-access-b584r\") pod \"kube-proxy-sqkbp\" (UID: \"d298e350-8c4d-4e3f-b9b2-fe5bbe08aa0c\") " pod="kube-system/kube-proxy-sqkbp"
Aug 13 08:41:51.682732 kubelet[3075]: I0813 08:41:51.682488 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d298e350-8c4d-4e3f-b9b2-fe5bbe08aa0c-xtables-lock\") pod \"kube-proxy-sqkbp\" (UID: \"d298e350-8c4d-4e3f-b9b2-fe5bbe08aa0c\") " pod="kube-system/kube-proxy-sqkbp"
Aug 13 08:41:51.796247 kubelet[3075]: E0813 08:41:51.796155 3075 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Aug 13 08:41:51.796247 kubelet[3075]: E0813 08:41:51.796245 3075 projected.go:194] Error preparing data for projected volume kube-api-access-b584r for pod kube-system/kube-proxy-sqkbp: configmap "kube-root-ca.crt" not found
Aug 13 08:41:51.796761 kubelet[3075]: E0813 08:41:51.796400 3075 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d298e350-8c4d-4e3f-b9b2-fe5bbe08aa0c-kube-api-access-b584r podName:d298e350-8c4d-4e3f-b9b2-fe5bbe08aa0c nodeName:}" failed. No retries permitted until 2025-08-13 08:41:52.296348715 +0000 UTC m=+6.491589770 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b584r" (UniqueName: "kubernetes.io/projected/d298e350-8c4d-4e3f-b9b2-fe5bbe08aa0c-kube-api-access-b584r") pod "kube-proxy-sqkbp" (UID: "d298e350-8c4d-4e3f-b9b2-fe5bbe08aa0c") : configmap "kube-root-ca.crt" not found
Aug 13 08:41:52.030109 systemd[1]: Created slice kubepods-besteffort-pod535bd065_545c_4532_aee5_13391d31f746.slice - libcontainer container kubepods-besteffort-pod535bd065_545c_4532_aee5_13391d31f746.slice.
Aug 13 08:41:52.085311 kubelet[3075]: I0813 08:41:52.085246 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv9h4\" (UniqueName: \"kubernetes.io/projected/535bd065-545c-4532-aee5-13391d31f746-kube-api-access-hv9h4\") pod \"tigera-operator-747864d56d-jsgwz\" (UID: \"535bd065-545c-4532-aee5-13391d31f746\") " pod="tigera-operator/tigera-operator-747864d56d-jsgwz"
Aug 13 08:41:52.085311 kubelet[3075]: I0813 08:41:52.085311 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/535bd065-545c-4532-aee5-13391d31f746-var-lib-calico\") pod \"tigera-operator-747864d56d-jsgwz\" (UID: \"535bd065-545c-4532-aee5-13391d31f746\") " pod="tigera-operator/tigera-operator-747864d56d-jsgwz"
Aug 13 08:41:52.333607 containerd[1817]: time="2025-08-13T08:41:52.333488320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-jsgwz,Uid:535bd065-545c-4532-aee5-13391d31f746,Namespace:tigera-operator,Attempt:0,}"
Aug 13 08:41:52.557975 containerd[1817]: time="2025-08-13T08:41:52.557841879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sqkbp,Uid:d298e350-8c4d-4e3f-b9b2-fe5bbe08aa0c,Namespace:kube-system,Attempt:0,}"
Aug 13 08:41:52.711761 containerd[1817]: time="2025-08-13T08:41:52.711501813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 08:41:52.711761 containerd[1817]: time="2025-08-13T08:41:52.711703854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 08:41:52.711761 containerd[1817]: time="2025-08-13T08:41:52.711713566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 08:41:52.711893 containerd[1817]: time="2025-08-13T08:41:52.711763111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 08:41:52.712349 containerd[1817]: time="2025-08-13T08:41:52.712129376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 08:41:52.712383 containerd[1817]: time="2025-08-13T08:41:52.712343643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 08:41:52.712383 containerd[1817]: time="2025-08-13T08:41:52.712354971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 08:41:52.712437 containerd[1817]: time="2025-08-13T08:41:52.712397650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 08:41:52.734471 systemd[1]: Started cri-containerd-9fceea6cbd858210b16956ac4674c5c9996fb6536749f03b86f430b7b52016c8.scope - libcontainer container 9fceea6cbd858210b16956ac4674c5c9996fb6536749f03b86f430b7b52016c8.
Aug 13 08:41:52.735325 systemd[1]: Started cri-containerd-ec7bfbd32b3557fb52d695abb1dac54c2e68b6db3b40c7c3e229aaeb8519e4fa.scope - libcontainer container ec7bfbd32b3557fb52d695abb1dac54c2e68b6db3b40c7c3e229aaeb8519e4fa.
Aug 13 08:41:52.747011 containerd[1817]: time="2025-08-13T08:41:52.746985947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sqkbp,Uid:d298e350-8c4d-4e3f-b9b2-fe5bbe08aa0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fceea6cbd858210b16956ac4674c5c9996fb6536749f03b86f430b7b52016c8\""
Aug 13 08:41:52.748731 containerd[1817]: time="2025-08-13T08:41:52.748709056Z" level=info msg="CreateContainer within sandbox \"9fceea6cbd858210b16956ac4674c5c9996fb6536749f03b86f430b7b52016c8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 08:41:52.755058 containerd[1817]: time="2025-08-13T08:41:52.755042576Z" level=info msg="CreateContainer within sandbox \"9fceea6cbd858210b16956ac4674c5c9996fb6536749f03b86f430b7b52016c8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b910a963f48319960b6cd8dc438f0ea170160b5044d5ef54c59d7ed6e256fd51\""
Aug 13 08:41:52.755374 containerd[1817]: time="2025-08-13T08:41:52.755347383Z" level=info msg="StartContainer for \"b910a963f48319960b6cd8dc438f0ea170160b5044d5ef54c59d7ed6e256fd51\""
Aug 13 08:41:52.762831 containerd[1817]: time="2025-08-13T08:41:52.762806175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-jsgwz,Uid:535bd065-545c-4532-aee5-13391d31f746,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ec7bfbd32b3557fb52d695abb1dac54c2e68b6db3b40c7c3e229aaeb8519e4fa\""
Aug 13 08:41:52.764487 containerd[1817]: time="2025-08-13T08:41:52.764467297Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Aug 13 08:41:52.779505 systemd[1]: Started cri-containerd-b910a963f48319960b6cd8dc438f0ea170160b5044d5ef54c59d7ed6e256fd51.scope - libcontainer container b910a963f48319960b6cd8dc438f0ea170160b5044d5ef54c59d7ed6e256fd51.
Aug 13 08:41:52.791865 containerd[1817]: time="2025-08-13T08:41:52.791843034Z" level=info msg="StartContainer for \"b910a963f48319960b6cd8dc438f0ea170160b5044d5ef54c59d7ed6e256fd51\" returns successfully"
Aug 13 08:41:53.851415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount352938509.mount: Deactivated successfully.
Aug 13 08:41:54.194888 containerd[1817]: time="2025-08-13T08:41:54.194839474Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 08:41:54.195072 containerd[1817]: time="2025-08-13T08:41:54.195044302Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Aug 13 08:41:54.195445 containerd[1817]: time="2025-08-13T08:41:54.195433121Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 08:41:54.196475 containerd[1817]: time="2025-08-13T08:41:54.196462714Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 08:41:54.196889 containerd[1817]: time="2025-08-13T08:41:54.196877650Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.432388676s"
Aug 13 08:41:54.196914 containerd[1817]: time="2025-08-13T08:41:54.196892015Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Aug 13 08:41:54.197809 containerd[1817]: time="2025-08-13T08:41:54.197794882Z" level=info msg="CreateContainer within sandbox \"ec7bfbd32b3557fb52d695abb1dac54c2e68b6db3b40c7c3e229aaeb8519e4fa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 13 08:41:54.202093 containerd[1817]: time="2025-08-13T08:41:54.202075286Z" level=info msg="CreateContainer within sandbox \"ec7bfbd32b3557fb52d695abb1dac54c2e68b6db3b40c7c3e229aaeb8519e4fa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bff21c54f026c9e65816447ed7e8e189f10e43aac7a0ae7f18c1fb33d2d9bb07\""
Aug 13 08:41:54.202355 containerd[1817]: time="2025-08-13T08:41:54.202340873Z" level=info msg="StartContainer for \"bff21c54f026c9e65816447ed7e8e189f10e43aac7a0ae7f18c1fb33d2d9bb07\""
Aug 13 08:41:54.248444 systemd[1]: Started cri-containerd-bff21c54f026c9e65816447ed7e8e189f10e43aac7a0ae7f18c1fb33d2d9bb07.scope - libcontainer container bff21c54f026c9e65816447ed7e8e189f10e43aac7a0ae7f18c1fb33d2d9bb07.
Aug 13 08:41:54.261826 containerd[1817]: time="2025-08-13T08:41:54.261802296Z" level=info msg="StartContainer for \"bff21c54f026c9e65816447ed7e8e189f10e43aac7a0ae7f18c1fb33d2d9bb07\" returns successfully"
Aug 13 08:41:54.477327 kubelet[3075]: I0813 08:41:54.477066 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sqkbp" podStartSLOduration=3.477026557 podStartE2EDuration="3.477026557s" podCreationTimestamp="2025-08-13 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 08:41:52.889601685 +0000 UTC m=+7.084842695" watchObservedRunningTime="2025-08-13 08:41:54.477026557 +0000 UTC m=+8.672267604"
Aug 13 08:41:54.930529 kubelet[3075]: I0813 08:41:54.930422 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-jsgwz" podStartSLOduration=2.497335829 podStartE2EDuration="3.930383434s" podCreationTimestamp="2025-08-13 08:41:51 +0000 UTC" firstStartedPulling="2025-08-13 08:41:52.764220809 +0000 UTC m=+6.959461793" lastFinishedPulling="2025-08-13 08:41:54.197268414 +0000 UTC m=+8.392509398" observedRunningTime="2025-08-13 08:41:54.93029234 +0000 UTC m=+9.125533429" watchObservedRunningTime="2025-08-13 08:41:54.930383434 +0000 UTC m=+9.125624467"
Aug 13 08:41:58.845608 sudo[2093]: pam_unix(sudo:session): session closed for user root
Aug 13 08:41:58.846770 sshd[2090]: pam_unix(sshd:session): session closed for user core
Aug 13 08:41:58.848468 systemd[1]: sshd@8-147.75.71.95:22-147.75.109.163:44896.service: Deactivated successfully.
Aug 13 08:41:58.849470 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 08:41:58.849575 systemd[1]: session-11.scope: Consumed 3.016s CPU time, 169.1M memory peak, 0B memory swap peak.
Aug 13 08:41:58.850155 systemd-logind[1810]: Session 11 logged out. Waiting for processes to exit.
Aug 13 08:41:58.850820 systemd-logind[1810]: Removed session 11.
Aug 13 08:41:59.651246 update_engine[1812]: I20250813 08:41:59.651196 1812 update_attempter.cc:509] Updating boot flags...
Aug 13 08:41:59.683219 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (3611)
Aug 13 08:41:59.710192 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (3612)
Aug 13 08:41:59.730189 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (3612)
Aug 13 08:42:01.122065 systemd[1]: Created slice kubepods-besteffort-pode3f59b39_b40b_42b2_a583_08f29d590ed4.slice - libcontainer container kubepods-besteffort-pode3f59b39_b40b_42b2_a583_08f29d590ed4.slice.
Aug 13 08:42:01.144634 kubelet[3075]: I0813 08:42:01.144601 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bk2k\" (UniqueName: \"kubernetes.io/projected/e3f59b39-b40b-42b2-a583-08f29d590ed4-kube-api-access-6bk2k\") pod \"calico-typha-79c4d9d5f5-lb9tj\" (UID: \"e3f59b39-b40b-42b2-a583-08f29d590ed4\") " pod="calico-system/calico-typha-79c4d9d5f5-lb9tj" Aug 13 08:42:01.144634 kubelet[3075]: I0813 08:42:01.144635 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3f59b39-b40b-42b2-a583-08f29d590ed4-tigera-ca-bundle\") pod \"calico-typha-79c4d9d5f5-lb9tj\" (UID: \"e3f59b39-b40b-42b2-a583-08f29d590ed4\") " pod="calico-system/calico-typha-79c4d9d5f5-lb9tj" Aug 13 08:42:01.145034 kubelet[3075]: I0813 08:42:01.144650 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e3f59b39-b40b-42b2-a583-08f29d590ed4-typha-certs\") pod \"calico-typha-79c4d9d5f5-lb9tj\" (UID: \"e3f59b39-b40b-42b2-a583-08f29d590ed4\") " pod="calico-system/calico-typha-79c4d9d5f5-lb9tj" Aug 13 08:42:01.426808 systemd[1]: Created slice kubepods-besteffort-podd9a4a6c1_26a0_4ddf_8faa_d810a5209826.slice - libcontainer container kubepods-besteffort-podd9a4a6c1_26a0_4ddf_8faa_d810a5209826.slice. Aug 13 08:42:01.427287 containerd[1817]: time="2025-08-13T08:42:01.427095858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79c4d9d5f5-lb9tj,Uid:e3f59b39-b40b-42b2-a583-08f29d590ed4,Namespace:calico-system,Attempt:0,}" Aug 13 08:42:01.437587 containerd[1817]: time="2025-08-13T08:42:01.437544493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 08:42:01.437780 containerd[1817]: time="2025-08-13T08:42:01.437759390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 08:42:01.437805 containerd[1817]: time="2025-08-13T08:42:01.437777110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:42:01.437839 containerd[1817]: time="2025-08-13T08:42:01.437823333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:42:01.446418 kubelet[3075]: I0813 08:42:01.446375 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d9a4a6c1-26a0-4ddf-8faa-d810a5209826-flexvol-driver-host\") pod \"calico-node-9mz27\" (UID: \"d9a4a6c1-26a0-4ddf-8faa-d810a5209826\") " pod="calico-system/calico-node-9mz27" Aug 13 08:42:01.446418 kubelet[3075]: I0813 08:42:01.446393 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d9a4a6c1-26a0-4ddf-8faa-d810a5209826-var-lib-calico\") pod \"calico-node-9mz27\" (UID: \"d9a4a6c1-26a0-4ddf-8faa-d810a5209826\") " pod="calico-system/calico-node-9mz27" Aug 13 08:42:01.446418 kubelet[3075]: I0813 08:42:01.446405 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d9a4a6c1-26a0-4ddf-8faa-d810a5209826-var-run-calico\") pod \"calico-node-9mz27\" (UID: \"d9a4a6c1-26a0-4ddf-8faa-d810a5209826\") " pod="calico-system/calico-node-9mz27" Aug 13 08:42:01.446418 kubelet[3075]: I0813 08:42:01.446415 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d9a4a6c1-26a0-4ddf-8faa-d810a5209826-node-certs\") pod \"calico-node-9mz27\" (UID: \"d9a4a6c1-26a0-4ddf-8faa-d810a5209826\") " pod="calico-system/calico-node-9mz27" Aug 13 08:42:01.446418 kubelet[3075]: I0813 08:42:01.446423 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9a4a6c1-26a0-4ddf-8faa-d810a5209826-xtables-lock\") pod \"calico-node-9mz27\" (UID: \"d9a4a6c1-26a0-4ddf-8faa-d810a5209826\") " pod="calico-system/calico-node-9mz27" Aug 13 08:42:01.446555 kubelet[3075]: I0813 08:42:01.446433 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9a4a6c1-26a0-4ddf-8faa-d810a5209826-tigera-ca-bundle\") pod \"calico-node-9mz27\" (UID: \"d9a4a6c1-26a0-4ddf-8faa-d810a5209826\") " pod="calico-system/calico-node-9mz27" Aug 13 08:42:01.446555 kubelet[3075]: I0813 08:42:01.446444 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d9a4a6c1-26a0-4ddf-8faa-d810a5209826-cni-net-dir\") pod \"calico-node-9mz27\" (UID: \"d9a4a6c1-26a0-4ddf-8faa-d810a5209826\") " pod="calico-system/calico-node-9mz27" Aug 13 08:42:01.446555 kubelet[3075]: I0813 08:42:01.446453 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9a4a6c1-26a0-4ddf-8faa-d810a5209826-lib-modules\") pod \"calico-node-9mz27\" (UID: \"d9a4a6c1-26a0-4ddf-8faa-d810a5209826\") " pod="calico-system/calico-node-9mz27" Aug 13 08:42:01.446555 kubelet[3075]: I0813 08:42:01.446473 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zgpc\" (UniqueName: 
\"kubernetes.io/projected/d9a4a6c1-26a0-4ddf-8faa-d810a5209826-kube-api-access-2zgpc\") pod \"calico-node-9mz27\" (UID: \"d9a4a6c1-26a0-4ddf-8faa-d810a5209826\") " pod="calico-system/calico-node-9mz27" Aug 13 08:42:01.446555 kubelet[3075]: I0813 08:42:01.446497 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d9a4a6c1-26a0-4ddf-8faa-d810a5209826-cni-bin-dir\") pod \"calico-node-9mz27\" (UID: \"d9a4a6c1-26a0-4ddf-8faa-d810a5209826\") " pod="calico-system/calico-node-9mz27" Aug 13 08:42:01.446645 kubelet[3075]: I0813 08:42:01.446509 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d9a4a6c1-26a0-4ddf-8faa-d810a5209826-policysync\") pod \"calico-node-9mz27\" (UID: \"d9a4a6c1-26a0-4ddf-8faa-d810a5209826\") " pod="calico-system/calico-node-9mz27" Aug 13 08:42:01.446645 kubelet[3075]: I0813 08:42:01.446541 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d9a4a6c1-26a0-4ddf-8faa-d810a5209826-cni-log-dir\") pod \"calico-node-9mz27\" (UID: \"d9a4a6c1-26a0-4ddf-8faa-d810a5209826\") " pod="calico-system/calico-node-9mz27" Aug 13 08:42:01.456498 systemd[1]: Started cri-containerd-daecd478ac627dd23ad468bcb7bcdebb854d3758c1bc2f7a7407dbaa72bc813a.scope - libcontainer container daecd478ac627dd23ad468bcb7bcdebb854d3758c1bc2f7a7407dbaa72bc813a. 
Aug 13 08:42:01.479837 containerd[1817]: time="2025-08-13T08:42:01.479814329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79c4d9d5f5-lb9tj,Uid:e3f59b39-b40b-42b2-a583-08f29d590ed4,Namespace:calico-system,Attempt:0,} returns sandbox id \"daecd478ac627dd23ad468bcb7bcdebb854d3758c1bc2f7a7407dbaa72bc813a\"" Aug 13 08:42:01.480499 containerd[1817]: time="2025-08-13T08:42:01.480486718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 08:42:01.550327 kubelet[3075]: E0813 08:42:01.550214 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.550327 kubelet[3075]: W0813 08:42:01.550268 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.550327 kubelet[3075]: E0813 08:42:01.550343 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.555362 kubelet[3075]: E0813 08:42:01.555303 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.555362 kubelet[3075]: W0813 08:42:01.555350 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.555687 kubelet[3075]: E0813 08:42:01.555404 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.567533 kubelet[3075]: E0813 08:42:01.567475 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.567533 kubelet[3075]: W0813 08:42:01.567519 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.567873 kubelet[3075]: E0813 08:42:01.567575 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.652228 kubelet[3075]: E0813 08:42:01.652081 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9rjlh" podUID="5be6b7a1-9903-4a08-a181-3b1833ebdcac" Aug 13 08:42:01.730815 containerd[1817]: time="2025-08-13T08:42:01.730686579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9mz27,Uid:d9a4a6c1-26a0-4ddf-8faa-d810a5209826,Namespace:calico-system,Attempt:0,}" Aug 13 08:42:01.738132 kubelet[3075]: E0813 08:42:01.738110 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.738132 kubelet[3075]: W0813 08:42:01.738128 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.738243 kubelet[3075]: E0813 08:42:01.738147 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.738321 kubelet[3075]: E0813 08:42:01.738278 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.738321 kubelet[3075]: W0813 08:42:01.738286 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.738321 kubelet[3075]: E0813 08:42:01.738294 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.738443 kubelet[3075]: E0813 08:42:01.738407 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.738443 kubelet[3075]: W0813 08:42:01.738415 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.738443 kubelet[3075]: E0813 08:42:01.738422 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.738573 kubelet[3075]: E0813 08:42:01.738565 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.738573 kubelet[3075]: W0813 08:42:01.738573 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.738626 kubelet[3075]: E0813 08:42:01.738581 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.738714 kubelet[3075]: E0813 08:42:01.738708 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.738734 kubelet[3075]: W0813 08:42:01.738715 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.738734 kubelet[3075]: E0813 08:42:01.738722 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.738823 kubelet[3075]: E0813 08:42:01.738816 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.738823 kubelet[3075]: W0813 08:42:01.738823 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.738860 kubelet[3075]: E0813 08:42:01.738830 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.738939 kubelet[3075]: E0813 08:42:01.738933 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.738960 kubelet[3075]: W0813 08:42:01.738940 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.738960 kubelet[3075]: E0813 08:42:01.738948 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.739038 kubelet[3075]: E0813 08:42:01.739033 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.739060 kubelet[3075]: W0813 08:42:01.739039 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.739060 kubelet[3075]: E0813 08:42:01.739046 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.739146 kubelet[3075]: E0813 08:42:01.739140 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.739162 kubelet[3075]: W0813 08:42:01.739147 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.739162 kubelet[3075]: E0813 08:42:01.739154 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.739295 kubelet[3075]: E0813 08:42:01.739289 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.739312 kubelet[3075]: W0813 08:42:01.739296 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.739312 kubelet[3075]: E0813 08:42:01.739303 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.739400 kubelet[3075]: E0813 08:42:01.739394 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.739418 kubelet[3075]: W0813 08:42:01.739402 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.739418 kubelet[3075]: E0813 08:42:01.739411 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.739539 kubelet[3075]: E0813 08:42:01.739533 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.739557 kubelet[3075]: W0813 08:42:01.739540 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.739557 kubelet[3075]: E0813 08:42:01.739547 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.739652 kubelet[3075]: E0813 08:42:01.739646 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.739673 kubelet[3075]: W0813 08:42:01.739653 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.739673 kubelet[3075]: E0813 08:42:01.739660 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.739783 kubelet[3075]: E0813 08:42:01.739774 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.739818 kubelet[3075]: W0813 08:42:01.739783 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.739818 kubelet[3075]: E0813 08:42:01.739793 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.739929 kubelet[3075]: E0813 08:42:01.739910 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.739929 kubelet[3075]: W0813 08:42:01.739917 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.739929 kubelet[3075]: E0813 08:42:01.739925 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.740045 kubelet[3075]: E0813 08:42:01.740036 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.740045 kubelet[3075]: W0813 08:42:01.740043 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.740114 kubelet[3075]: E0813 08:42:01.740051 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.740150 kubelet[3075]: E0813 08:42:01.740145 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.740186 kubelet[3075]: W0813 08:42:01.740150 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.740186 kubelet[3075]: E0813 08:42:01.740155 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.740299 kubelet[3075]: E0813 08:42:01.740290 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.740324 kubelet[3075]: W0813 08:42:01.740299 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.740324 kubelet[3075]: E0813 08:42:01.740311 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.740411 kubelet[3075]: E0813 08:42:01.740404 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.740411 kubelet[3075]: W0813 08:42:01.740409 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.740472 kubelet[3075]: E0813 08:42:01.740414 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.740509 kubelet[3075]: E0813 08:42:01.740498 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.740509 kubelet[3075]: W0813 08:42:01.740503 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.740509 kubelet[3075]: E0813 08:42:01.740508 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.741826 containerd[1817]: time="2025-08-13T08:42:01.741777572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 08:42:01.741826 containerd[1817]: time="2025-08-13T08:42:01.741809202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 08:42:01.741826 containerd[1817]: time="2025-08-13T08:42:01.741816045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:42:01.741917 containerd[1817]: time="2025-08-13T08:42:01.741860856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:42:01.748448 kubelet[3075]: E0813 08:42:01.748431 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.748448 kubelet[3075]: W0813 08:42:01.748445 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.748530 kubelet[3075]: E0813 08:42:01.748457 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.748530 kubelet[3075]: I0813 08:42:01.748475 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n96v6\" (UniqueName: \"kubernetes.io/projected/5be6b7a1-9903-4a08-a181-3b1833ebdcac-kube-api-access-n96v6\") pod \"csi-node-driver-9rjlh\" (UID: \"5be6b7a1-9903-4a08-a181-3b1833ebdcac\") " pod="calico-system/csi-node-driver-9rjlh" Aug 13 08:42:01.748591 kubelet[3075]: E0813 08:42:01.748583 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.748591 kubelet[3075]: W0813 08:42:01.748590 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.748635 kubelet[3075]: E0813 08:42:01.748598 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.748635 kubelet[3075]: I0813 08:42:01.748609 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5be6b7a1-9903-4a08-a181-3b1833ebdcac-kubelet-dir\") pod \"csi-node-driver-9rjlh\" (UID: \"5be6b7a1-9903-4a08-a181-3b1833ebdcac\") " pod="calico-system/csi-node-driver-9rjlh" Aug 13 08:42:01.748708 kubelet[3075]: E0813 08:42:01.748703 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.748726 kubelet[3075]: W0813 08:42:01.748708 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.748726 kubelet[3075]: E0813 08:42:01.748715 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.748768 kubelet[3075]: I0813 08:42:01.748724 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5be6b7a1-9903-4a08-a181-3b1833ebdcac-registration-dir\") pod \"csi-node-driver-9rjlh\" (UID: \"5be6b7a1-9903-4a08-a181-3b1833ebdcac\") " pod="calico-system/csi-node-driver-9rjlh" Aug 13 08:42:01.748818 kubelet[3075]: E0813 08:42:01.748812 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.748818 kubelet[3075]: W0813 08:42:01.748817 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.748853 kubelet[3075]: E0813 08:42:01.748822 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.748853 kubelet[3075]: I0813 08:42:01.748830 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5be6b7a1-9903-4a08-a181-3b1833ebdcac-varrun\") pod \"csi-node-driver-9rjlh\" (UID: \"5be6b7a1-9903-4a08-a181-3b1833ebdcac\") " pod="calico-system/csi-node-driver-9rjlh" Aug 13 08:42:01.748953 kubelet[3075]: E0813 08:42:01.748946 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.748977 kubelet[3075]: W0813 08:42:01.748953 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.748977 kubelet[3075]: E0813 08:42:01.748962 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.749062 kubelet[3075]: E0813 08:42:01.749056 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.749080 kubelet[3075]: W0813 08:42:01.749063 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.749080 kubelet[3075]: E0813 08:42:01.749072 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.749181 kubelet[3075]: E0813 08:42:01.749173 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.749200 kubelet[3075]: W0813 08:42:01.749183 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.749200 kubelet[3075]: E0813 08:42:01.749189 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.749273 kubelet[3075]: E0813 08:42:01.749268 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.749289 kubelet[3075]: W0813 08:42:01.749273 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.749289 kubelet[3075]: E0813 08:42:01.749279 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.749358 kubelet[3075]: E0813 08:42:01.749354 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.749358 kubelet[3075]: W0813 08:42:01.749358 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.749392 kubelet[3075]: E0813 08:42:01.749364 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.749437 kubelet[3075]: E0813 08:42:01.749432 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.749437 kubelet[3075]: W0813 08:42:01.749436 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.749472 kubelet[3075]: E0813 08:42:01.749442 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.749517 kubelet[3075]: E0813 08:42:01.749512 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.749534 kubelet[3075]: W0813 08:42:01.749516 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.749534 kubelet[3075]: E0813 08:42:01.749523 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.749563 kubelet[3075]: I0813 08:42:01.749535 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5be6b7a1-9903-4a08-a181-3b1833ebdcac-socket-dir\") pod \"csi-node-driver-9rjlh\" (UID: \"5be6b7a1-9903-4a08-a181-3b1833ebdcac\") " pod="calico-system/csi-node-driver-9rjlh" Aug 13 08:42:01.749636 kubelet[3075]: E0813 08:42:01.749630 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.749654 kubelet[3075]: W0813 08:42:01.749636 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.749654 kubelet[3075]: E0813 08:42:01.749642 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.749715 kubelet[3075]: E0813 08:42:01.749710 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.749715 kubelet[3075]: W0813 08:42:01.749714 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.749752 kubelet[3075]: E0813 08:42:01.749720 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.749799 kubelet[3075]: E0813 08:42:01.749794 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.749819 kubelet[3075]: W0813 08:42:01.749799 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.749819 kubelet[3075]: E0813 08:42:01.749803 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.749879 kubelet[3075]: E0813 08:42:01.749874 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.749896 kubelet[3075]: W0813 08:42:01.749879 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.749896 kubelet[3075]: E0813 08:42:01.749883 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.761398 systemd[1]: Started cri-containerd-6463d6109281c65bfadd7bb91800f9474bc8326c02cef9f597b7021903cd0440.scope - libcontainer container 6463d6109281c65bfadd7bb91800f9474bc8326c02cef9f597b7021903cd0440. Aug 13 08:42:01.771398 containerd[1817]: time="2025-08-13T08:42:01.771376798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9mz27,Uid:d9a4a6c1-26a0-4ddf-8faa-d810a5209826,Namespace:calico-system,Attempt:0,} returns sandbox id \"6463d6109281c65bfadd7bb91800f9474bc8326c02cef9f597b7021903cd0440\"" Aug 13 08:42:01.851085 kubelet[3075]: E0813 08:42:01.850991 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.851085 kubelet[3075]: W0813 08:42:01.851033 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.851085 kubelet[3075]: E0813 08:42:01.851076 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.851852 kubelet[3075]: E0813 08:42:01.851770 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.851852 kubelet[3075]: W0813 08:42:01.851812 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.851852 kubelet[3075]: E0813 08:42:01.851862 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.852433 kubelet[3075]: E0813 08:42:01.852332 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.852433 kubelet[3075]: W0813 08:42:01.852358 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.852433 kubelet[3075]: E0813 08:42:01.852391 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.852879 kubelet[3075]: E0813 08:42:01.852809 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.852879 kubelet[3075]: W0813 08:42:01.852833 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.853089 kubelet[3075]: E0813 08:42:01.852889 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.853436 kubelet[3075]: E0813 08:42:01.853363 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.853436 kubelet[3075]: W0813 08:42:01.853390 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.853436 kubelet[3075]: E0813 08:42:01.853425 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.853966 kubelet[3075]: E0813 08:42:01.853916 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.853966 kubelet[3075]: W0813 08:42:01.853942 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.854173 kubelet[3075]: E0813 08:42:01.853977 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.854574 kubelet[3075]: E0813 08:42:01.854503 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.854574 kubelet[3075]: W0813 08:42:01.854531 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.854574 kubelet[3075]: E0813 08:42:01.854567 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.855303 kubelet[3075]: E0813 08:42:01.855244 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.855303 kubelet[3075]: W0813 08:42:01.855282 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.855540 kubelet[3075]: E0813 08:42:01.855327 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.856006 kubelet[3075]: E0813 08:42:01.855950 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.856006 kubelet[3075]: W0813 08:42:01.855987 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.856243 kubelet[3075]: E0813 08:42:01.856030 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.856685 kubelet[3075]: E0813 08:42:01.856627 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.856685 kubelet[3075]: W0813 08:42:01.856665 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.856920 kubelet[3075]: E0813 08:42:01.856781 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.857268 kubelet[3075]: E0813 08:42:01.857229 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.857268 kubelet[3075]: W0813 08:42:01.857264 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.857553 kubelet[3075]: E0813 08:42:01.857372 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.857923 kubelet[3075]: E0813 08:42:01.857852 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.857923 kubelet[3075]: W0813 08:42:01.857904 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.858143 kubelet[3075]: E0813 08:42:01.858038 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.858534 kubelet[3075]: E0813 08:42:01.858497 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.858534 kubelet[3075]: W0813 08:42:01.858529 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.858734 kubelet[3075]: E0813 08:42:01.858639 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.859027 kubelet[3075]: E0813 08:42:01.858998 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.859143 kubelet[3075]: W0813 08:42:01.859026 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.859143 kubelet[3075]: E0813 08:42:01.859104 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.859608 kubelet[3075]: E0813 08:42:01.859551 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.859608 kubelet[3075]: W0813 08:42:01.859579 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.859804 kubelet[3075]: E0813 08:42:01.859686 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.860108 kubelet[3075]: E0813 08:42:01.860080 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.860108 kubelet[3075]: W0813 08:42:01.860107 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.860421 kubelet[3075]: E0813 08:42:01.860202 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.860645 kubelet[3075]: E0813 08:42:01.860600 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.860817 kubelet[3075]: W0813 08:42:01.860646 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.860817 kubelet[3075]: E0813 08:42:01.860738 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.861277 kubelet[3075]: E0813 08:42:01.861241 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.861277 kubelet[3075]: W0813 08:42:01.861270 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.861582 kubelet[3075]: E0813 08:42:01.861381 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.861828 kubelet[3075]: E0813 08:42:01.861774 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.861828 kubelet[3075]: W0813 08:42:01.861806 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.862220 kubelet[3075]: E0813 08:42:01.861873 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.862366 kubelet[3075]: E0813 08:42:01.862341 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.862464 kubelet[3075]: W0813 08:42:01.862373 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.862554 kubelet[3075]: E0813 08:42:01.862458 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.862969 kubelet[3075]: E0813 08:42:01.862929 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.862969 kubelet[3075]: W0813 08:42:01.862957 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.863269 kubelet[3075]: E0813 08:42:01.863040 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.863557 kubelet[3075]: E0813 08:42:01.863518 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.863716 kubelet[3075]: W0813 08:42:01.863554 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.863716 kubelet[3075]: E0813 08:42:01.863641 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.864094 kubelet[3075]: E0813 08:42:01.864060 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.864094 kubelet[3075]: W0813 08:42:01.864088 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.864343 kubelet[3075]: E0813 08:42:01.864151 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.864716 kubelet[3075]: E0813 08:42:01.864671 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.864716 kubelet[3075]: W0813 08:42:01.864699 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.865089 kubelet[3075]: E0813 08:42:01.864738 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:01.865471 kubelet[3075]: E0813 08:42:01.865384 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.865471 kubelet[3075]: W0813 08:42:01.865418 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.865471 kubelet[3075]: E0813 08:42:01.865451 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:01.882665 kubelet[3075]: E0813 08:42:01.882573 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:01.882665 kubelet[3075]: W0813 08:42:01.882611 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:01.882665 kubelet[3075]: E0813 08:42:01.882650 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:02.848477 kubelet[3075]: E0813 08:42:02.848388 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9rjlh" podUID="5be6b7a1-9903-4a08-a181-3b1833ebdcac" Aug 13 08:42:03.398912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3364706323.mount: Deactivated successfully. 
Aug 13 08:42:03.986298 containerd[1817]: time="2025-08-13T08:42:03.986244163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:03.986487 containerd[1817]: time="2025-08-13T08:42:03.986404358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 13 08:42:03.986727 containerd[1817]: time="2025-08-13T08:42:03.986686502Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:03.988053 containerd[1817]: time="2025-08-13T08:42:03.988011220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:03.988352 containerd[1817]: time="2025-08-13T08:42:03.988311301Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.507807719s" Aug 13 08:42:03.988352 containerd[1817]: time="2025-08-13T08:42:03.988325346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 08:42:03.988737 containerd[1817]: time="2025-08-13T08:42:03.988702289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 08:42:03.991633 containerd[1817]: time="2025-08-13T08:42:03.991574635Z" level=info msg="CreateContainer within sandbox \"daecd478ac627dd23ad468bcb7bcdebb854d3758c1bc2f7a7407dbaa72bc813a\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 08:42:03.995830 containerd[1817]: time="2025-08-13T08:42:03.995791134Z" level=info msg="CreateContainer within sandbox \"daecd478ac627dd23ad468bcb7bcdebb854d3758c1bc2f7a7407dbaa72bc813a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"538806cc19816cd364b36d6ece6922b2bf6664140adb71a1e3192cbb773760ad\"" Aug 13 08:42:03.996002 containerd[1817]: time="2025-08-13T08:42:03.995989022Z" level=info msg="StartContainer for \"538806cc19816cd364b36d6ece6922b2bf6664140adb71a1e3192cbb773760ad\"" Aug 13 08:42:04.015365 systemd[1]: Started cri-containerd-538806cc19816cd364b36d6ece6922b2bf6664140adb71a1e3192cbb773760ad.scope - libcontainer container 538806cc19816cd364b36d6ece6922b2bf6664140adb71a1e3192cbb773760ad. Aug 13 08:42:04.042171 containerd[1817]: time="2025-08-13T08:42:04.042146419Z" level=info msg="StartContainer for \"538806cc19816cd364b36d6ece6922b2bf6664140adb71a1e3192cbb773760ad\" returns successfully" Aug 13 08:42:04.848147 kubelet[3075]: E0813 08:42:04.848018 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9rjlh" podUID="5be6b7a1-9903-4a08-a181-3b1833ebdcac" Aug 13 08:42:04.932662 kubelet[3075]: I0813 08:42:04.932540 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-79c4d9d5f5-lb9tj" podStartSLOduration=1.424207129 podStartE2EDuration="3.932501593s" podCreationTimestamp="2025-08-13 08:42:01 +0000 UTC" firstStartedPulling="2025-08-13 08:42:01.48035012 +0000 UTC m=+15.675591104" lastFinishedPulling="2025-08-13 08:42:03.988644584 +0000 UTC m=+18.183885568" observedRunningTime="2025-08-13 08:42:04.932264711 +0000 UTC m=+19.127505765" watchObservedRunningTime="2025-08-13 08:42:04.932501593 +0000 UTC 
m=+19.127742631" Aug 13 08:42:04.963130 kubelet[3075]: E0813 08:42:04.963073 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.963431 kubelet[3075]: W0813 08:42:04.963134 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.963431 kubelet[3075]: E0813 08:42:04.963212 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:04.963840 kubelet[3075]: E0813 08:42:04.963794 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.963840 kubelet[3075]: W0813 08:42:04.963830 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.964166 kubelet[3075]: E0813 08:42:04.963865 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:04.964541 kubelet[3075]: E0813 08:42:04.964464 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.964541 kubelet[3075]: W0813 08:42:04.964503 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.964541 kubelet[3075]: E0813 08:42:04.964538 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:04.965278 kubelet[3075]: E0813 08:42:04.965234 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.965278 kubelet[3075]: W0813 08:42:04.965265 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.965608 kubelet[3075]: E0813 08:42:04.965297 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:04.965946 kubelet[3075]: E0813 08:42:04.965884 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.965946 kubelet[3075]: W0813 08:42:04.965930 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.966248 kubelet[3075]: E0813 08:42:04.965979 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:04.966647 kubelet[3075]: E0813 08:42:04.966605 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.966815 kubelet[3075]: W0813 08:42:04.966646 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.966815 kubelet[3075]: E0813 08:42:04.966689 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:04.967303 kubelet[3075]: E0813 08:42:04.967246 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.967303 kubelet[3075]: W0813 08:42:04.967274 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.967303 kubelet[3075]: E0813 08:42:04.967303 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:04.967860 kubelet[3075]: E0813 08:42:04.967805 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.967860 kubelet[3075]: W0813 08:42:04.967832 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.967860 kubelet[3075]: E0813 08:42:04.967860 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:04.968375 kubelet[3075]: E0813 08:42:04.968317 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.968375 kubelet[3075]: W0813 08:42:04.968345 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.968375 kubelet[3075]: E0813 08:42:04.968372 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:04.968885 kubelet[3075]: E0813 08:42:04.968831 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.969011 kubelet[3075]: W0813 08:42:04.968895 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.969011 kubelet[3075]: E0813 08:42:04.968926 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:04.969497 kubelet[3075]: E0813 08:42:04.969449 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.969497 kubelet[3075]: W0813 08:42:04.969477 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.969744 kubelet[3075]: E0813 08:42:04.969504 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:04.970047 kubelet[3075]: E0813 08:42:04.969992 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.970047 kubelet[3075]: W0813 08:42:04.970018 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.970047 kubelet[3075]: E0813 08:42:04.970045 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:04.970656 kubelet[3075]: E0813 08:42:04.970602 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.970656 kubelet[3075]: W0813 08:42:04.970629 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.970656 kubelet[3075]: E0813 08:42:04.970656 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:04.971244 kubelet[3075]: E0813 08:42:04.971162 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.971244 kubelet[3075]: W0813 08:42:04.971211 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.971244 kubelet[3075]: E0813 08:42:04.971240 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:04.971792 kubelet[3075]: E0813 08:42:04.971740 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.971792 kubelet[3075]: W0813 08:42:04.971766 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.971792 kubelet[3075]: E0813 08:42:04.971792 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:04.983293 kubelet[3075]: E0813 08:42:04.983243 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.983293 kubelet[3075]: W0813 08:42:04.983287 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.983598 kubelet[3075]: E0813 08:42:04.983323 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:04.983986 kubelet[3075]: E0813 08:42:04.983903 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.983986 kubelet[3075]: W0813 08:42:04.983940 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.983986 kubelet[3075]: E0813 08:42:04.983983 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:04.984728 kubelet[3075]: E0813 08:42:04.984640 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.984728 kubelet[3075]: W0813 08:42:04.984683 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.984728 kubelet[3075]: E0813 08:42:04.984728 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:04.985355 kubelet[3075]: E0813 08:42:04.985284 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.985355 kubelet[3075]: W0813 08:42:04.985314 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.985355 kubelet[3075]: E0813 08:42:04.985349 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:04.985991 kubelet[3075]: E0813 08:42:04.985904 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.985991 kubelet[3075]: W0813 08:42:04.985943 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.986320 kubelet[3075]: E0813 08:42:04.986048 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:04.986636 kubelet[3075]: E0813 08:42:04.986550 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.986636 kubelet[3075]: W0813 08:42:04.986588 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.986918 kubelet[3075]: E0813 08:42:04.986708 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:04.987270 kubelet[3075]: E0813 08:42:04.987163 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.987270 kubelet[3075]: W0813 08:42:04.987217 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.987578 kubelet[3075]: E0813 08:42:04.987341 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:04.987840 kubelet[3075]: E0813 08:42:04.987783 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.987840 kubelet[3075]: W0813 08:42:04.987812 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.988054 kubelet[3075]: E0813 08:42:04.987925 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:04.988312 kubelet[3075]: E0813 08:42:04.988283 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.988312 kubelet[3075]: W0813 08:42:04.988310 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.988531 kubelet[3075]: E0813 08:42:04.988344 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:04.988910 kubelet[3075]: E0813 08:42:04.988880 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.989016 kubelet[3075]: W0813 08:42:04.988909 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.989110 kubelet[3075]: E0813 08:42:04.989032 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:04.989469 kubelet[3075]: E0813 08:42:04.989442 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.989598 kubelet[3075]: W0813 08:42:04.989475 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.989703 kubelet[3075]: E0813 08:42:04.989588 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:04.989979 kubelet[3075]: E0813 08:42:04.989903 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.989979 kubelet[3075]: W0813 08:42:04.989930 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.990246 kubelet[3075]: E0813 08:42:04.990023 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:04.990699 kubelet[3075]: E0813 08:42:04.990610 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.990699 kubelet[3075]: W0813 08:42:04.990637 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.990699 kubelet[3075]: E0813 08:42:04.990671 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:04.991277 kubelet[3075]: E0813 08:42:04.991207 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.991277 kubelet[3075]: W0813 08:42:04.991233 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.991563 kubelet[3075]: E0813 08:42:04.991321 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:04.991800 kubelet[3075]: E0813 08:42:04.991725 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.991800 kubelet[3075]: W0813 08:42:04.991752 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.991800 kubelet[3075]: E0813 08:42:04.991780 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:04.992322 kubelet[3075]: E0813 08:42:04.992250 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.992322 kubelet[3075]: W0813 08:42:04.992277 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.992322 kubelet[3075]: E0813 08:42:04.992307 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:04.992961 kubelet[3075]: E0813 08:42:04.992888 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.992961 kubelet[3075]: W0813 08:42:04.992922 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.992961 kubelet[3075]: E0813 08:42:04.992956 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 08:42:04.993481 kubelet[3075]: E0813 08:42:04.993450 3075 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 08:42:04.993481 kubelet[3075]: W0813 08:42:04.993479 3075 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 08:42:04.993664 kubelet[3075]: E0813 08:42:04.993509 3075 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 08:42:05.728360 containerd[1817]: time="2025-08-13T08:42:05.728306772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:05.728584 containerd[1817]: time="2025-08-13T08:42:05.728531706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 08:42:05.728857 containerd[1817]: time="2025-08-13T08:42:05.728844990Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:05.729775 containerd[1817]: time="2025-08-13T08:42:05.729760563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:05.730183 containerd[1817]: time="2025-08-13T08:42:05.730164200Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.741446406s" Aug 13 08:42:05.730248 containerd[1817]: time="2025-08-13T08:42:05.730186101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 08:42:05.731321 containerd[1817]: time="2025-08-13T08:42:05.731309366Z" level=info msg="CreateContainer within sandbox \"6463d6109281c65bfadd7bb91800f9474bc8326c02cef9f597b7021903cd0440\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 08:42:05.736041 containerd[1817]: time="2025-08-13T08:42:05.735995929Z" level=info msg="CreateContainer within sandbox \"6463d6109281c65bfadd7bb91800f9474bc8326c02cef9f597b7021903cd0440\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"54da7bccd50dca138c7936258362ffbf08de6bf54d6952ad805c79c52ea50537\"" Aug 13 08:42:05.736255 containerd[1817]: time="2025-08-13T08:42:05.736181580Z" level=info msg="StartContainer for \"54da7bccd50dca138c7936258362ffbf08de6bf54d6952ad805c79c52ea50537\"" Aug 13 08:42:05.763364 systemd[1]: Started cri-containerd-54da7bccd50dca138c7936258362ffbf08de6bf54d6952ad805c79c52ea50537.scope - libcontainer container 54da7bccd50dca138c7936258362ffbf08de6bf54d6952ad805c79c52ea50537. Aug 13 08:42:05.776790 containerd[1817]: time="2025-08-13T08:42:05.776761216Z" level=info msg="StartContainer for \"54da7bccd50dca138c7936258362ffbf08de6bf54d6952ad805c79c52ea50537\" returns successfully" Aug 13 08:42:05.782658 systemd[1]: cri-containerd-54da7bccd50dca138c7936258362ffbf08de6bf54d6952ad805c79c52ea50537.scope: Deactivated successfully. 
Aug 13 08:42:05.798603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54da7bccd50dca138c7936258362ffbf08de6bf54d6952ad805c79c52ea50537-rootfs.mount: Deactivated successfully. Aug 13 08:42:05.918332 kubelet[3075]: I0813 08:42:05.918242 3075 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 08:42:06.238651 containerd[1817]: time="2025-08-13T08:42:06.238601592Z" level=info msg="shim disconnected" id=54da7bccd50dca138c7936258362ffbf08de6bf54d6952ad805c79c52ea50537 namespace=k8s.io Aug 13 08:42:06.238651 containerd[1817]: time="2025-08-13T08:42:06.238649186Z" level=warning msg="cleaning up after shim disconnected" id=54da7bccd50dca138c7936258362ffbf08de6bf54d6952ad805c79c52ea50537 namespace=k8s.io Aug 13 08:42:06.238651 containerd[1817]: time="2025-08-13T08:42:06.238655291Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 08:42:06.848283 kubelet[3075]: E0813 08:42:06.848142 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9rjlh" podUID="5be6b7a1-9903-4a08-a181-3b1833ebdcac" Aug 13 08:42:06.925618 containerd[1817]: time="2025-08-13T08:42:06.925565831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 08:42:08.848128 kubelet[3075]: E0813 08:42:08.848052 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9rjlh" podUID="5be6b7a1-9903-4a08-a181-3b1833ebdcac" Aug 13 08:42:10.469986 containerd[1817]: time="2025-08-13T08:42:10.469961699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Aug 13 08:42:10.470200 containerd[1817]: time="2025-08-13T08:42:10.470162308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 08:42:10.470547 containerd[1817]: time="2025-08-13T08:42:10.470507940Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:10.471508 containerd[1817]: time="2025-08-13T08:42:10.471467166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:10.472227 containerd[1817]: time="2025-08-13T08:42:10.472163183Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.546544523s" Aug 13 08:42:10.472227 containerd[1817]: time="2025-08-13T08:42:10.472185198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 08:42:10.473120 containerd[1817]: time="2025-08-13T08:42:10.473109631Z" level=info msg="CreateContainer within sandbox \"6463d6109281c65bfadd7bb91800f9474bc8326c02cef9f597b7021903cd0440\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 08:42:10.477423 containerd[1817]: time="2025-08-13T08:42:10.477381087Z" level=info msg="CreateContainer within sandbox \"6463d6109281c65bfadd7bb91800f9474bc8326c02cef9f597b7021903cd0440\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"0ea17c501272ba2f80fa52debce5cbffa8dc747e0b24a9615976f47fb4e5cb49\"" Aug 13 08:42:10.477593 containerd[1817]: time="2025-08-13T08:42:10.477582209Z" level=info msg="StartContainer for \"0ea17c501272ba2f80fa52debce5cbffa8dc747e0b24a9615976f47fb4e5cb49\"" Aug 13 08:42:10.503355 systemd[1]: Started cri-containerd-0ea17c501272ba2f80fa52debce5cbffa8dc747e0b24a9615976f47fb4e5cb49.scope - libcontainer container 0ea17c501272ba2f80fa52debce5cbffa8dc747e0b24a9615976f47fb4e5cb49. Aug 13 08:42:10.516512 containerd[1817]: time="2025-08-13T08:42:10.516485915Z" level=info msg="StartContainer for \"0ea17c501272ba2f80fa52debce5cbffa8dc747e0b24a9615976f47fb4e5cb49\" returns successfully" Aug 13 08:42:10.847962 kubelet[3075]: E0813 08:42:10.847881 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9rjlh" podUID="5be6b7a1-9903-4a08-a181-3b1833ebdcac" Aug 13 08:42:11.084482 containerd[1817]: time="2025-08-13T08:42:11.084458747Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 08:42:11.085451 systemd[1]: cri-containerd-0ea17c501272ba2f80fa52debce5cbffa8dc747e0b24a9615976f47fb4e5cb49.scope: Deactivated successfully. Aug 13 08:42:11.094681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ea17c501272ba2f80fa52debce5cbffa8dc747e0b24a9615976f47fb4e5cb49-rootfs.mount: Deactivated successfully. 
Aug 13 08:42:11.162968 kubelet[3075]: I0813 08:42:11.162735 3075 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Aug 13 08:42:11.219215 systemd[1]: Created slice kubepods-burstable-pod1b313c34_ec50_4ab0_8732_819433f220ba.slice - libcontainer container kubepods-burstable-pod1b313c34_ec50_4ab0_8732_819433f220ba.slice.
Aug 13 08:42:11.232929 systemd[1]: Created slice kubepods-besteffort-pode4fc3133_4463_408d_975f_642106e09eac.slice - libcontainer container kubepods-besteffort-pode4fc3133_4463_408d_975f_642106e09eac.slice.
Aug 13 08:42:11.234713 kubelet[3075]: I0813 08:42:11.234658 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5106c9bf-90a4-4cd5-ab2d-ed67b58da6b7-calico-apiserver-certs\") pod \"calico-apiserver-658b4bf6b-zt564\" (UID: \"5106c9bf-90a4-4cd5-ab2d-ed67b58da6b7\") " pod="calico-apiserver/calico-apiserver-658b4bf6b-zt564"
Aug 13 08:42:11.234872 kubelet[3075]: I0813 08:42:11.234766 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4fc3133-4463-408d-975f-642106e09eac-tigera-ca-bundle\") pod \"calico-kube-controllers-6468cf5bfc-jkn9n\" (UID: \"e4fc3133-4463-408d-975f-642106e09eac\") " pod="calico-system/calico-kube-controllers-6468cf5bfc-jkn9n"
Aug 13 08:42:11.234872 kubelet[3075]: I0813 08:42:11.234850 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72f8g\" (UniqueName: \"kubernetes.io/projected/e4fc3133-4463-408d-975f-642106e09eac-kube-api-access-72f8g\") pod \"calico-kube-controllers-6468cf5bfc-jkn9n\" (UID: \"e4fc3133-4463-408d-975f-642106e09eac\") " pod="calico-system/calico-kube-controllers-6468cf5bfc-jkn9n"
Aug 13 08:42:11.235084 kubelet[3075]: I0813 08:42:11.234904 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7945323e-4111-40cf-841e-fe297ec96877-whisker-ca-bundle\") pod \"whisker-69cffdcfc8-5ltj4\" (UID: \"7945323e-4111-40cf-841e-fe297ec96877\") " pod="calico-system/whisker-69cffdcfc8-5ltj4"
Aug 13 08:42:11.235084 kubelet[3075]: I0813 08:42:11.235011 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b313c34-ec50-4ab0-8732-819433f220ba-config-volume\") pod \"coredns-668d6bf9bc-bqv6d\" (UID: \"1b313c34-ec50-4ab0-8732-819433f220ba\") " pod="kube-system/coredns-668d6bf9bc-bqv6d"
Aug 13 08:42:11.235290 kubelet[3075]: I0813 08:42:11.235098 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6jw6\" (UniqueName: \"kubernetes.io/projected/5106c9bf-90a4-4cd5-ab2d-ed67b58da6b7-kube-api-access-f6jw6\") pod \"calico-apiserver-658b4bf6b-zt564\" (UID: \"5106c9bf-90a4-4cd5-ab2d-ed67b58da6b7\") " pod="calico-apiserver/calico-apiserver-658b4bf6b-zt564"
Aug 13 08:42:11.235290 kubelet[3075]: I0813 08:42:11.235160 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7945323e-4111-40cf-841e-fe297ec96877-whisker-backend-key-pair\") pod \"whisker-69cffdcfc8-5ltj4\" (UID: \"7945323e-4111-40cf-841e-fe297ec96877\") " pod="calico-system/whisker-69cffdcfc8-5ltj4"
Aug 13 08:42:11.235290 kubelet[3075]: I0813 08:42:11.235258 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1d73c1f7-7407-44b3-82c7-2812b426db1e-calico-apiserver-certs\") pod \"calico-apiserver-658b4bf6b-bvb5g\" (UID: \"1d73c1f7-7407-44b3-82c7-2812b426db1e\") " pod="calico-apiserver/calico-apiserver-658b4bf6b-bvb5g"
Aug 13 08:42:11.235540 kubelet[3075]: I0813 08:42:11.235319 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bcdd4219-dd3a-4d45-91ce-fcd49d31398a-goldmane-key-pair\") pod \"goldmane-768f4c5c69-fddzn\" (UID: \"bcdd4219-dd3a-4d45-91ce-fcd49d31398a\") " pod="calico-system/goldmane-768f4c5c69-fddzn"
Aug 13 08:42:11.235540 kubelet[3075]: I0813 08:42:11.235374 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn2sb\" (UniqueName: \"kubernetes.io/projected/bcdd4219-dd3a-4d45-91ce-fcd49d31398a-kube-api-access-kn2sb\") pod \"goldmane-768f4c5c69-fddzn\" (UID: \"bcdd4219-dd3a-4d45-91ce-fcd49d31398a\") " pod="calico-system/goldmane-768f4c5c69-fddzn"
Aug 13 08:42:11.235540 kubelet[3075]: I0813 08:42:11.235469 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcdd4219-dd3a-4d45-91ce-fcd49d31398a-config\") pod \"goldmane-768f4c5c69-fddzn\" (UID: \"bcdd4219-dd3a-4d45-91ce-fcd49d31398a\") " pod="calico-system/goldmane-768f4c5c69-fddzn"
Aug 13 08:42:11.235785 kubelet[3075]: I0813 08:42:11.235622 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbg4z\" (UniqueName: \"kubernetes.io/projected/7945323e-4111-40cf-841e-fe297ec96877-kube-api-access-wbg4z\") pod \"whisker-69cffdcfc8-5ltj4\" (UID: \"7945323e-4111-40cf-841e-fe297ec96877\") " pod="calico-system/whisker-69cffdcfc8-5ltj4"
Aug 13 08:42:11.235785 kubelet[3075]: I0813 08:42:11.235704 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcdd4219-dd3a-4d45-91ce-fcd49d31398a-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-fddzn\" (UID: \"bcdd4219-dd3a-4d45-91ce-fcd49d31398a\") " pod="calico-system/goldmane-768f4c5c69-fddzn"
Aug 13 08:42:11.235785 kubelet[3075]: I0813 08:42:11.235752 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfvkv\" (UniqueName: \"kubernetes.io/projected/9ca82ef8-ce0c-4a00-987e-b04a91763a60-kube-api-access-mfvkv\") pod \"coredns-668d6bf9bc-wdzjv\" (UID: \"9ca82ef8-ce0c-4a00-987e-b04a91763a60\") " pod="kube-system/coredns-668d6bf9bc-wdzjv"
Aug 13 08:42:11.236035 kubelet[3075]: I0813 08:42:11.235835 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz4pm\" (UniqueName: \"kubernetes.io/projected/1d73c1f7-7407-44b3-82c7-2812b426db1e-kube-api-access-wz4pm\") pod \"calico-apiserver-658b4bf6b-bvb5g\" (UID: \"1d73c1f7-7407-44b3-82c7-2812b426db1e\") " pod="calico-apiserver/calico-apiserver-658b4bf6b-bvb5g"
Aug 13 08:42:11.236035 kubelet[3075]: I0813 08:42:11.235893 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ca82ef8-ce0c-4a00-987e-b04a91763a60-config-volume\") pod \"coredns-668d6bf9bc-wdzjv\" (UID: \"9ca82ef8-ce0c-4a00-987e-b04a91763a60\") " pod="kube-system/coredns-668d6bf9bc-wdzjv"
Aug 13 08:42:11.236035 kubelet[3075]: I0813 08:42:11.235962 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hrt9\" (UniqueName: \"kubernetes.io/projected/1b313c34-ec50-4ab0-8732-819433f220ba-kube-api-access-5hrt9\") pod \"coredns-668d6bf9bc-bqv6d\" (UID: \"1b313c34-ec50-4ab0-8732-819433f220ba\") " pod="kube-system/coredns-668d6bf9bc-bqv6d"
Aug 13 08:42:11.242506 systemd[1]: Created slice kubepods-burstable-pod9ca82ef8_ce0c_4a00_987e_b04a91763a60.slice - libcontainer container kubepods-burstable-pod9ca82ef8_ce0c_4a00_987e_b04a91763a60.slice.
Aug 13 08:42:11.248166 systemd[1]: Created slice kubepods-besteffort-pod5106c9bf_90a4_4cd5_ab2d_ed67b58da6b7.slice - libcontainer container kubepods-besteffort-pod5106c9bf_90a4_4cd5_ab2d_ed67b58da6b7.slice.
Aug 13 08:42:11.252527 systemd[1]: Created slice kubepods-besteffort-pod1d73c1f7_7407_44b3_82c7_2812b426db1e.slice - libcontainer container kubepods-besteffort-pod1d73c1f7_7407_44b3_82c7_2812b426db1e.slice.
Aug 13 08:42:11.256762 systemd[1]: Created slice kubepods-besteffort-podbcdd4219_dd3a_4d45_91ce_fcd49d31398a.slice - libcontainer container kubepods-besteffort-podbcdd4219_dd3a_4d45_91ce_fcd49d31398a.slice.
Aug 13 08:42:11.260233 systemd[1]: Created slice kubepods-besteffort-pod7945323e_4111_40cf_841e_fe297ec96877.slice - libcontainer container kubepods-besteffort-pod7945323e_4111_40cf_841e_fe297ec96877.slice.
Aug 13 08:42:11.461160 containerd[1817]: time="2025-08-13T08:42:11.461086464Z" level=info msg="shim disconnected" id=0ea17c501272ba2f80fa52debce5cbffa8dc747e0b24a9615976f47fb4e5cb49 namespace=k8s.io
Aug 13 08:42:11.461160 containerd[1817]: time="2025-08-13T08:42:11.461120465Z" level=warning msg="cleaning up after shim disconnected" id=0ea17c501272ba2f80fa52debce5cbffa8dc747e0b24a9615976f47fb4e5cb49 namespace=k8s.io
Aug 13 08:42:11.461160 containerd[1817]: time="2025-08-13T08:42:11.461127807Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 08:42:11.526682 containerd[1817]: time="2025-08-13T08:42:11.526564359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bqv6d,Uid:1b313c34-ec50-4ab0-8732-819433f220ba,Namespace:kube-system,Attempt:0,}"
Aug 13 08:42:11.538130 containerd[1817]: time="2025-08-13T08:42:11.538106986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6468cf5bfc-jkn9n,Uid:e4fc3133-4463-408d-975f-642106e09eac,Namespace:calico-system,Attempt:0,}"
Aug 13 08:42:11.545682 containerd[1817]: time="2025-08-13T08:42:11.545658634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wdzjv,Uid:9ca82ef8-ce0c-4a00-987e-b04a91763a60,Namespace:kube-system,Attempt:0,}"
Aug 13 08:42:11.551207 containerd[1817]: time="2025-08-13T08:42:11.551165637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-658b4bf6b-zt564,Uid:5106c9bf-90a4-4cd5-ab2d-ed67b58da6b7,Namespace:calico-apiserver,Attempt:0,}"
Aug 13 08:42:11.554724 containerd[1817]: time="2025-08-13T08:42:11.554696307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-658b4bf6b-bvb5g,Uid:1d73c1f7-7407-44b3-82c7-2812b426db1e,Namespace:calico-apiserver,Attempt:0,}"
Aug 13 08:42:11.555969 containerd[1817]: time="2025-08-13T08:42:11.555944743Z" level=error msg="Failed to destroy network for sandbox \"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.556169 containerd[1817]: time="2025-08-13T08:42:11.556151042Z" level=error msg="encountered an error cleaning up failed sandbox \"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.556450 containerd[1817]: time="2025-08-13T08:42:11.556195562Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bqv6d,Uid:1b313c34-ec50-4ab0-8732-819433f220ba,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.556474 kubelet[3075]: E0813 08:42:11.556326 3075 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.556474 kubelet[3075]: E0813 08:42:11.556373 3075 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bqv6d"
Aug 13 08:42:11.556474 kubelet[3075]: E0813 08:42:11.556387 3075 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bqv6d"
Aug 13 08:42:11.556551 kubelet[3075]: E0813 08:42:11.556420 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-bqv6d_kube-system(1b313c34-ec50-4ab0-8732-819433f220ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-bqv6d_kube-system(1b313c34-ec50-4ab0-8732-819433f220ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bqv6d" podUID="1b313c34-ec50-4ab0-8732-819433f220ba"
Aug 13 08:42:11.558738 containerd[1817]: time="2025-08-13T08:42:11.558716796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-fddzn,Uid:bcdd4219-dd3a-4d45-91ce-fcd49d31398a,Namespace:calico-system,Attempt:0,}"
Aug 13 08:42:11.562550 containerd[1817]: time="2025-08-13T08:42:11.562523211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69cffdcfc8-5ltj4,Uid:7945323e-4111-40cf-841e-fe297ec96877,Namespace:calico-system,Attempt:0,}"
Aug 13 08:42:11.567418 containerd[1817]: time="2025-08-13T08:42:11.567390732Z" level=error msg="Failed to destroy network for sandbox \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.567620 containerd[1817]: time="2025-08-13T08:42:11.567602922Z" level=error msg="encountered an error cleaning up failed sandbox \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.567667 containerd[1817]: time="2025-08-13T08:42:11.567637774Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6468cf5bfc-jkn9n,Uid:e4fc3133-4463-408d-975f-642106e09eac,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.567823 kubelet[3075]: E0813 08:42:11.567795 3075 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.567859 kubelet[3075]: E0813 08:42:11.567847 3075 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6468cf5bfc-jkn9n"
Aug 13 08:42:11.567881 kubelet[3075]: E0813 08:42:11.567867 3075 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6468cf5bfc-jkn9n"
Aug 13 08:42:11.567924 kubelet[3075]: E0813 08:42:11.567904 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6468cf5bfc-jkn9n_calico-system(e4fc3133-4463-408d-975f-642106e09eac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6468cf5bfc-jkn9n_calico-system(e4fc3133-4463-408d-975f-642106e09eac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6468cf5bfc-jkn9n" podUID="e4fc3133-4463-408d-975f-642106e09eac"
Aug 13 08:42:11.577184 containerd[1817]: time="2025-08-13T08:42:11.577138583Z" level=error msg="Failed to destroy network for sandbox \"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.577406 containerd[1817]: time="2025-08-13T08:42:11.577384147Z" level=error msg="encountered an error cleaning up failed sandbox \"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.577450 containerd[1817]: time="2025-08-13T08:42:11.577423972Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wdzjv,Uid:9ca82ef8-ce0c-4a00-987e-b04a91763a60,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.577596 kubelet[3075]: E0813 08:42:11.577574 3075 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.577651 kubelet[3075]: E0813 08:42:11.577611 3075 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wdzjv"
Aug 13 08:42:11.577651 kubelet[3075]: E0813 08:42:11.577625 3075 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wdzjv"
Aug 13 08:42:11.577721 kubelet[3075]: E0813 08:42:11.577652 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wdzjv_kube-system(9ca82ef8-ce0c-4a00-987e-b04a91763a60)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wdzjv_kube-system(9ca82ef8-ce0c-4a00-987e-b04a91763a60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wdzjv" podUID="9ca82ef8-ce0c-4a00-987e-b04a91763a60"
Aug 13 08:42:11.581003 containerd[1817]: time="2025-08-13T08:42:11.580968143Z" level=error msg="Failed to destroy network for sandbox \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.581222 containerd[1817]: time="2025-08-13T08:42:11.581201123Z" level=error msg="encountered an error cleaning up failed sandbox \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.581268 containerd[1817]: time="2025-08-13T08:42:11.581242584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-658b4bf6b-zt564,Uid:5106c9bf-90a4-4cd5-ab2d-ed67b58da6b7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.581415 kubelet[3075]: E0813 08:42:11.581383 3075 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.581457 kubelet[3075]: E0813 08:42:11.581437 3075 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-658b4bf6b-zt564"
Aug 13 08:42:11.581494 kubelet[3075]: E0813 08:42:11.581458 3075 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-658b4bf6b-zt564"
Aug 13 08:42:11.581539 kubelet[3075]: E0813 08:42:11.581517 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-658b4bf6b-zt564_calico-apiserver(5106c9bf-90a4-4cd5-ab2d-ed67b58da6b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-658b4bf6b-zt564_calico-apiserver(5106c9bf-90a4-4cd5-ab2d-ed67b58da6b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-658b4bf6b-zt564" podUID="5106c9bf-90a4-4cd5-ab2d-ed67b58da6b7"
Aug 13 08:42:11.585694 containerd[1817]: time="2025-08-13T08:42:11.585662460Z" level=error msg="Failed to destroy network for sandbox \"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.585871 containerd[1817]: time="2025-08-13T08:42:11.585849194Z" level=error msg="encountered an error cleaning up failed sandbox \"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.585901 containerd[1817]: time="2025-08-13T08:42:11.585889799Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-658b4bf6b-bvb5g,Uid:1d73c1f7-7407-44b3-82c7-2812b426db1e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.586049 kubelet[3075]: E0813 08:42:11.586023 3075 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.586079 kubelet[3075]: E0813 08:42:11.586067 3075 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-658b4bf6b-bvb5g"
Aug 13 08:42:11.586101 kubelet[3075]: E0813 08:42:11.586082 3075 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-658b4bf6b-bvb5g"
Aug 13 08:42:11.586125 kubelet[3075]: E0813 08:42:11.586110 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-658b4bf6b-bvb5g_calico-apiserver(1d73c1f7-7407-44b3-82c7-2812b426db1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-658b4bf6b-bvb5g_calico-apiserver(1d73c1f7-7407-44b3-82c7-2812b426db1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-658b4bf6b-bvb5g" podUID="1d73c1f7-7407-44b3-82c7-2812b426db1e"
Aug 13 08:42:11.587856 containerd[1817]: time="2025-08-13T08:42:11.587840872Z" level=error msg="Failed to destroy network for sandbox \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.587993 containerd[1817]: time="2025-08-13T08:42:11.587981810Z" level=error msg="encountered an error cleaning up failed sandbox \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.588018 containerd[1817]: time="2025-08-13T08:42:11.588007856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-fddzn,Uid:bcdd4219-dd3a-4d45-91ce-fcd49d31398a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.588091 kubelet[3075]: E0813 08:42:11.588080 3075 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.588114 kubelet[3075]: E0813 08:42:11.588099 3075 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-fddzn"
Aug 13 08:42:11.588114 kubelet[3075]: E0813 08:42:11.588109 3075 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-fddzn"
Aug 13 08:42:11.588153 kubelet[3075]: E0813 08:42:11.588131 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-fddzn_calico-system(bcdd4219-dd3a-4d45-91ce-fcd49d31398a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-fddzn_calico-system(bcdd4219-dd3a-4d45-91ce-fcd49d31398a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-fddzn" podUID="bcdd4219-dd3a-4d45-91ce-fcd49d31398a"
Aug 13 08:42:11.590870 containerd[1817]: time="2025-08-13T08:42:11.590831584Z" level=error msg="Failed to destroy network for sandbox \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.591011 containerd[1817]: time="2025-08-13T08:42:11.590974333Z" level=error msg="encountered an error cleaning up failed sandbox \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.591011 containerd[1817]: time="2025-08-13T08:42:11.590994888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69cffdcfc8-5ltj4,Uid:7945323e-4111-40cf-841e-fe297ec96877,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.591067 kubelet[3075]: E0813 08:42:11.591056 3075 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 08:42:11.591088 kubelet[3075]: E0813 08:42:11.591078 3075 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69cffdcfc8-5ltj4"
Aug 13 08:42:11.591105 kubelet[3075]: E0813 08:42:11.591091 3075 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69cffdcfc8-5ltj4"
Aug 13 08:42:11.591124 kubelet[3075]: E0813 08:42:11.591111 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69cffdcfc8-5ltj4_calico-system(7945323e-4111-40cf-841e-fe297ec96877)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69cffdcfc8-5ltj4_calico-system(7945323e-4111-40cf-841e-fe297ec96877)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69cffdcfc8-5ltj4" podUID="7945323e-4111-40cf-841e-fe297ec96877"
Aug 13 08:42:11.941981 kubelet[3075]: I0813 08:42:11.941923 3075 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21"
Aug 13 08:42:11.943637 containerd[1817]: time="2025-08-13T08:42:11.943548062Z" level=info msg="StopPodSandbox for \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\""
Aug 13 08:42:11.944114 containerd[1817]: time="2025-08-13T08:42:11.944054479Z" level=info msg="Ensure that sandbox 83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21 in task-service has been cleanup successfully"
Aug 13 08:42:11.944467 kubelet[3075]: I0813 08:42:11.944417 3075 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2"
Aug 13 08:42:11.945621 containerd[1817]: time="2025-08-13T08:42:11.945542119Z" level=info msg="StopPodSandbox for \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\""
Aug 13 08:42:11.946090 containerd[1817]: time="2025-08-13T08:42:11.946026338Z" level=info msg="Ensure that sandbox 5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2 in task-service has been cleanup successfully"
Aug 13 08:42:11.946542 kubelet[3075]: I0813 08:42:11.946491 3075 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9"
Aug 13 08:42:11.947754 containerd[1817]: time="2025-08-13T08:42:11.947674927Z" level=info msg="StopPodSandbox for \"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\""
Aug 13 08:42:11.948322 containerd[1817]: time="2025-08-13T08:42:11.948234111Z" level=info msg="Ensure that sandbox a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9 in task-service has been cleanup successfully"
Aug 13 08:42:11.953590 containerd[1817]: time="2025-08-13T08:42:11.953496606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Aug 13 08:42:11.953895 kubelet[3075]: I0813 08:42:11.953674 3075 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde"
Aug 13 08:42:11.954684 containerd[1817]: time="2025-08-13T08:42:11.954659593Z" level=info msg="StopPodSandbox for \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\""
Aug 13 08:42:11.954855 containerd[1817]: time="2025-08-13T08:42:11.954838133Z" level=info msg="Ensure that sandbox 4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde in task-service has been cleanup successfully"
Aug 13 08:42:11.955038 kubelet[3075]: I0813 08:42:11.955025 3075 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a"
Aug 13 08:42:11.955432 containerd[1817]: time="2025-08-13T08:42:11.955410826Z" level=info msg="StopPodSandbox for \"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\""
Aug 13 08:42:11.955536 containerd[1817]: time="2025-08-13T08:42:11.955521074Z" level=info
msg="Ensure that sandbox e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a in task-service has been cleanup successfully" Aug 13 08:42:11.955763 kubelet[3075]: I0813 08:42:11.955748 3075 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Aug 13 08:42:11.956115 containerd[1817]: time="2025-08-13T08:42:11.956095526Z" level=info msg="StopPodSandbox for \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\"" Aug 13 08:42:11.956242 containerd[1817]: time="2025-08-13T08:42:11.956225935Z" level=info msg="Ensure that sandbox 00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841 in task-service has been cleanup successfully" Aug 13 08:42:11.956550 kubelet[3075]: I0813 08:42:11.956536 3075 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Aug 13 08:42:11.956923 containerd[1817]: time="2025-08-13T08:42:11.956900113Z" level=info msg="StopPodSandbox for \"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\"" Aug 13 08:42:11.957044 containerd[1817]: time="2025-08-13T08:42:11.957031942Z" level=info msg="Ensure that sandbox 673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4 in task-service has been cleanup successfully" Aug 13 08:42:11.970772 containerd[1817]: time="2025-08-13T08:42:11.970723647Z" level=error msg="StopPodSandbox for \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\" failed" error="failed to destroy network for sandbox \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 08:42:11.970967 containerd[1817]: time="2025-08-13T08:42:11.970937519Z" level=error msg="StopPodSandbox for 
\"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\" failed" error="failed to destroy network for sandbox \"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 08:42:11.971041 kubelet[3075]: E0813 08:42:11.970958 3075 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Aug 13 08:42:11.971100 kubelet[3075]: E0813 08:42:11.971026 3075 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2"} Aug 13 08:42:11.971144 kubelet[3075]: E0813 08:42:11.971093 3075 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e4fc3133-4463-408d-975f-642106e09eac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 08:42:11.971144 kubelet[3075]: E0813 08:42:11.971121 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e4fc3133-4463-408d-975f-642106e09eac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6468cf5bfc-jkn9n" podUID="e4fc3133-4463-408d-975f-642106e09eac" Aug 13 08:42:11.971335 kubelet[3075]: E0813 08:42:11.971259 3075 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Aug 13 08:42:11.971335 kubelet[3075]: E0813 08:42:11.971286 3075 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21"} Aug 13 08:42:11.971335 kubelet[3075]: E0813 08:42:11.971314 3075 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5106c9bf-90a4-4cd5-ab2d-ed67b58da6b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 08:42:11.971488 containerd[1817]: time="2025-08-13T08:42:11.971134010Z" level=error msg="StopPodSandbox for \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\" failed" error="failed to destroy network for sandbox \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 08:42:11.971521 kubelet[3075]: E0813 08:42:11.971336 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5106c9bf-90a4-4cd5-ab2d-ed67b58da6b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-658b4bf6b-zt564" podUID="5106c9bf-90a4-4cd5-ab2d-ed67b58da6b7" Aug 13 08:42:11.971521 kubelet[3075]: E0813 08:42:11.971367 3075 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Aug 13 08:42:11.971521 kubelet[3075]: E0813 08:42:11.971389 3075 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9"} Aug 13 08:42:11.971521 kubelet[3075]: E0813 08:42:11.971414 3075 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1b313c34-ec50-4ab0-8732-819433f220ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 08:42:11.971645 kubelet[3075]: E0813 08:42:11.971438 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1b313c34-ec50-4ab0-8732-819433f220ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bqv6d" podUID="1b313c34-ec50-4ab0-8732-819433f220ba" Aug 13 08:42:11.972817 containerd[1817]: time="2025-08-13T08:42:11.972780855Z" level=error msg="StopPodSandbox for \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\" failed" error="failed to destroy network for sandbox \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 08:42:11.972921 kubelet[3075]: E0813 08:42:11.972901 3075 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Aug 13 08:42:11.972972 kubelet[3075]: E0813 08:42:11.972926 3075 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde"} Aug 13 08:42:11.972972 kubelet[3075]: E0813 08:42:11.972948 3075 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bcdd4219-dd3a-4d45-91ce-fcd49d31398a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 08:42:11.973058 kubelet[3075]: E0813 08:42:11.972969 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bcdd4219-dd3a-4d45-91ce-fcd49d31398a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-fddzn" podUID="bcdd4219-dd3a-4d45-91ce-fcd49d31398a" Aug 13 08:42:11.973606 containerd[1817]: time="2025-08-13T08:42:11.973586391Z" level=error msg="StopPodSandbox for \"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\" failed" error="failed to destroy network for sandbox \"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 08:42:11.973722 kubelet[3075]: E0813 08:42:11.973702 3075 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Aug 13 08:42:11.973763 kubelet[3075]: E0813 08:42:11.973728 3075 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a"} Aug 13 08:42:11.973763 kubelet[3075]: E0813 08:42:11.973750 3075 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9ca82ef8-ce0c-4a00-987e-b04a91763a60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 08:42:11.973822 kubelet[3075]: E0813 08:42:11.973765 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9ca82ef8-ce0c-4a00-987e-b04a91763a60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wdzjv" podUID="9ca82ef8-ce0c-4a00-987e-b04a91763a60" Aug 13 08:42:11.974259 containerd[1817]: time="2025-08-13T08:42:11.974203397Z" level=error msg="StopPodSandbox for \"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\" failed" error="failed to destroy network for sandbox 
\"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 08:42:11.974320 kubelet[3075]: E0813 08:42:11.974301 3075 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Aug 13 08:42:11.974384 kubelet[3075]: E0813 08:42:11.974322 3075 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4"} Aug 13 08:42:11.974384 kubelet[3075]: E0813 08:42:11.974342 3075 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1d73c1f7-7407-44b3-82c7-2812b426db1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 08:42:11.974384 kubelet[3075]: E0813 08:42:11.974355 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1d73c1f7-7407-44b3-82c7-2812b426db1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-658b4bf6b-bvb5g" podUID="1d73c1f7-7407-44b3-82c7-2812b426db1e" Aug 13 08:42:11.975082 containerd[1817]: time="2025-08-13T08:42:11.975061610Z" level=error msg="StopPodSandbox for \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\" failed" error="failed to destroy network for sandbox \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 08:42:11.975164 kubelet[3075]: E0813 08:42:11.975148 3075 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Aug 13 08:42:11.975208 kubelet[3075]: E0813 08:42:11.975169 3075 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841"} Aug 13 08:42:11.975208 kubelet[3075]: E0813 08:42:11.975194 3075 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7945323e-4111-40cf-841e-fe297ec96877\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Aug 13 08:42:11.975275 kubelet[3075]: E0813 08:42:11.975208 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7945323e-4111-40cf-841e-fe297ec96877\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69cffdcfc8-5ltj4" podUID="7945323e-4111-40cf-841e-fe297ec96877" Aug 13 08:42:12.482360 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2-shm.mount: Deactivated successfully. Aug 13 08:42:12.482419 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9-shm.mount: Deactivated successfully. Aug 13 08:42:12.862128 systemd[1]: Created slice kubepods-besteffort-pod5be6b7a1_9903_4a08_a181_3b1833ebdcac.slice - libcontainer container kubepods-besteffort-pod5be6b7a1_9903_4a08_a181_3b1833ebdcac.slice. 
Aug 13 08:42:12.867664 containerd[1817]: time="2025-08-13T08:42:12.867576930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9rjlh,Uid:5be6b7a1-9903-4a08-a181-3b1833ebdcac,Namespace:calico-system,Attempt:0,}" Aug 13 08:42:12.896349 containerd[1817]: time="2025-08-13T08:42:12.896295364Z" level=error msg="Failed to destroy network for sandbox \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 08:42:12.896522 containerd[1817]: time="2025-08-13T08:42:12.896478498Z" level=error msg="encountered an error cleaning up failed sandbox \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 08:42:12.896522 containerd[1817]: time="2025-08-13T08:42:12.896509656Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9rjlh,Uid:5be6b7a1-9903-4a08-a181-3b1833ebdcac,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 08:42:12.896685 kubelet[3075]: E0813 08:42:12.896636 3075 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 13 08:42:12.896685 kubelet[3075]: E0813 08:42:12.896680 3075 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9rjlh" Aug 13 08:42:12.896736 kubelet[3075]: E0813 08:42:12.896694 3075 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9rjlh" Aug 13 08:42:12.896736 kubelet[3075]: E0813 08:42:12.896719 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9rjlh_calico-system(5be6b7a1-9903-4a08-a181-3b1833ebdcac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9rjlh_calico-system(5be6b7a1-9903-4a08-a181-3b1833ebdcac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9rjlh" podUID="5be6b7a1-9903-4a08-a181-3b1833ebdcac" Aug 13 08:42:12.897803 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f-shm.mount: Deactivated successfully. Aug 13 08:42:12.958441 kubelet[3075]: I0813 08:42:12.958423 3075 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Aug 13 08:42:12.958715 containerd[1817]: time="2025-08-13T08:42:12.958700966Z" level=info msg="StopPodSandbox for \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\"" Aug 13 08:42:12.958802 containerd[1817]: time="2025-08-13T08:42:12.958789769Z" level=info msg="Ensure that sandbox f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f in task-service has been cleanup successfully" Aug 13 08:42:12.971914 containerd[1817]: time="2025-08-13T08:42:12.971880517Z" level=error msg="StopPodSandbox for \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\" failed" error="failed to destroy network for sandbox \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 08:42:12.972090 kubelet[3075]: E0813 08:42:12.972063 3075 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Aug 13 08:42:12.972134 kubelet[3075]: E0813 08:42:12.972102 3075 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f"} Aug 13 08:42:12.972134 kubelet[3075]: E0813 08:42:12.972129 3075 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5be6b7a1-9903-4a08-a181-3b1833ebdcac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 08:42:12.972208 kubelet[3075]: E0813 08:42:12.972145 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5be6b7a1-9903-4a08-a181-3b1833ebdcac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9rjlh" podUID="5be6b7a1-9903-4a08-a181-3b1833ebdcac" Aug 13 08:42:17.430740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4271607671.mount: Deactivated successfully. 
Aug 13 08:42:17.458923 containerd[1817]: time="2025-08-13T08:42:17.458897352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:17.459146 containerd[1817]: time="2025-08-13T08:42:17.459110448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 08:42:17.459431 containerd[1817]: time="2025-08-13T08:42:17.459419275Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:17.460271 containerd[1817]: time="2025-08-13T08:42:17.460260479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:17.460965 containerd[1817]: time="2025-08-13T08:42:17.460923176Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 5.507237636s" Aug 13 08:42:17.460965 containerd[1817]: time="2025-08-13T08:42:17.460937465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 08:42:17.464359 containerd[1817]: time="2025-08-13T08:42:17.464316803Z" level=info msg="CreateContainer within sandbox \"6463d6109281c65bfadd7bb91800f9474bc8326c02cef9f597b7021903cd0440\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 08:42:17.470223 containerd[1817]: time="2025-08-13T08:42:17.470201583Z" level=info 
msg="CreateContainer within sandbox \"6463d6109281c65bfadd7bb91800f9474bc8326c02cef9f597b7021903cd0440\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8ec4b8819d5a405e40f1cccb9dff3949918391573eaffa3e331b5ef037027cbf\"" Aug 13 08:42:17.470498 containerd[1817]: time="2025-08-13T08:42:17.470482677Z" level=info msg="StartContainer for \"8ec4b8819d5a405e40f1cccb9dff3949918391573eaffa3e331b5ef037027cbf\"" Aug 13 08:42:17.493478 systemd[1]: Started cri-containerd-8ec4b8819d5a405e40f1cccb9dff3949918391573eaffa3e331b5ef037027cbf.scope - libcontainer container 8ec4b8819d5a405e40f1cccb9dff3949918391573eaffa3e331b5ef037027cbf. Aug 13 08:42:17.507777 containerd[1817]: time="2025-08-13T08:42:17.507723459Z" level=info msg="StartContainer for \"8ec4b8819d5a405e40f1cccb9dff3949918391573eaffa3e331b5ef037027cbf\" returns successfully" Aug 13 08:42:17.575192 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 08:42:17.575247 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 08:42:17.613090 containerd[1817]: time="2025-08-13T08:42:17.613062747Z" level=info msg="StopPodSandbox for \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\"" Aug 13 08:42:17.654243 containerd[1817]: 2025-08-13 08:42:17.637 [INFO][4670] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Aug 13 08:42:17.654243 containerd[1817]: 2025-08-13 08:42:17.637 [INFO][4670] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" iface="eth0" netns="/var/run/netns/cni-391e6725-f4e1-530a-846e-82162ed5ddb0" Aug 13 08:42:17.654243 containerd[1817]: 2025-08-13 08:42:17.637 [INFO][4670] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" iface="eth0" netns="/var/run/netns/cni-391e6725-f4e1-530a-846e-82162ed5ddb0" Aug 13 08:42:17.654243 containerd[1817]: 2025-08-13 08:42:17.638 [INFO][4670] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" iface="eth0" netns="/var/run/netns/cni-391e6725-f4e1-530a-846e-82162ed5ddb0" Aug 13 08:42:17.654243 containerd[1817]: 2025-08-13 08:42:17.638 [INFO][4670] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Aug 13 08:42:17.654243 containerd[1817]: 2025-08-13 08:42:17.638 [INFO][4670] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Aug 13 08:42:17.654243 containerd[1817]: 2025-08-13 08:42:17.648 [INFO][4698] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" HandleID="k8s-pod-network.00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--69cffdcfc8--5ltj4-eth0" Aug 13 08:42:17.654243 containerd[1817]: 2025-08-13 08:42:17.648 [INFO][4698] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:17.654243 containerd[1817]: 2025-08-13 08:42:17.648 [INFO][4698] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:17.654243 containerd[1817]: 2025-08-13 08:42:17.651 [WARNING][4698] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" HandleID="k8s-pod-network.00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--69cffdcfc8--5ltj4-eth0" Aug 13 08:42:17.654243 containerd[1817]: 2025-08-13 08:42:17.651 [INFO][4698] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" HandleID="k8s-pod-network.00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--69cffdcfc8--5ltj4-eth0" Aug 13 08:42:17.654243 containerd[1817]: 2025-08-13 08:42:17.651 [INFO][4698] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:17.654243 containerd[1817]: 2025-08-13 08:42:17.653 [INFO][4670] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Aug 13 08:42:17.654543 containerd[1817]: time="2025-08-13T08:42:17.654320881Z" level=info msg="TearDown network for sandbox \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\" successfully" Aug 13 08:42:17.654543 containerd[1817]: time="2025-08-13T08:42:17.654340253Z" level=info msg="StopPodSandbox for \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\" returns successfully" Aug 13 08:42:17.678508 kubelet[3075]: I0813 08:42:17.678488 3075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7945323e-4111-40cf-841e-fe297ec96877-whisker-backend-key-pair\") pod \"7945323e-4111-40cf-841e-fe297ec96877\" (UID: \"7945323e-4111-40cf-841e-fe297ec96877\") " Aug 13 08:42:17.678916 kubelet[3075]: I0813 08:42:17.678529 3075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbg4z\" (UniqueName: 
\"kubernetes.io/projected/7945323e-4111-40cf-841e-fe297ec96877-kube-api-access-wbg4z\") pod \"7945323e-4111-40cf-841e-fe297ec96877\" (UID: \"7945323e-4111-40cf-841e-fe297ec96877\") " Aug 13 08:42:17.678916 kubelet[3075]: I0813 08:42:17.678555 3075 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7945323e-4111-40cf-841e-fe297ec96877-whisker-ca-bundle\") pod \"7945323e-4111-40cf-841e-fe297ec96877\" (UID: \"7945323e-4111-40cf-841e-fe297ec96877\") " Aug 13 08:42:17.678916 kubelet[3075]: I0813 08:42:17.678778 3075 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7945323e-4111-40cf-841e-fe297ec96877-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7945323e-4111-40cf-841e-fe297ec96877" (UID: "7945323e-4111-40cf-841e-fe297ec96877"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 08:42:17.679956 kubelet[3075]: I0813 08:42:17.679939 3075 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7945323e-4111-40cf-841e-fe297ec96877-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7945323e-4111-40cf-841e-fe297ec96877" (UID: "7945323e-4111-40cf-841e-fe297ec96877"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 08:42:17.679956 kubelet[3075]: I0813 08:42:17.679945 3075 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7945323e-4111-40cf-841e-fe297ec96877-kube-api-access-wbg4z" (OuterVolumeSpecName: "kube-api-access-wbg4z") pod "7945323e-4111-40cf-841e-fe297ec96877" (UID: "7945323e-4111-40cf-841e-fe297ec96877"). InnerVolumeSpecName "kube-api-access-wbg4z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 08:42:17.780226 kubelet[3075]: I0813 08:42:17.779973 3075 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbg4z\" (UniqueName: \"kubernetes.io/projected/7945323e-4111-40cf-841e-fe297ec96877-kube-api-access-wbg4z\") on node \"ci-4081.3.5-a-711ae8cc9f\" DevicePath \"\"" Aug 13 08:42:17.780226 kubelet[3075]: I0813 08:42:17.780046 3075 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7945323e-4111-40cf-841e-fe297ec96877-whisker-ca-bundle\") on node \"ci-4081.3.5-a-711ae8cc9f\" DevicePath \"\"" Aug 13 08:42:17.780226 kubelet[3075]: I0813 08:42:17.780082 3075 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7945323e-4111-40cf-841e-fe297ec96877-whisker-backend-key-pair\") on node \"ci-4081.3.5-a-711ae8cc9f\" DevicePath \"\"" Aug 13 08:42:17.863388 systemd[1]: Removed slice kubepods-besteffort-pod7945323e_4111_40cf_841e_fe297ec96877.slice - libcontainer container kubepods-besteffort-pod7945323e_4111_40cf_841e_fe297ec96877.slice. Aug 13 08:42:18.010272 kubelet[3075]: I0813 08:42:18.010144 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9mz27" podStartSLOduration=1.320757502 podStartE2EDuration="17.010107611s" podCreationTimestamp="2025-08-13 08:42:01 +0000 UTC" firstStartedPulling="2025-08-13 08:42:01.771932167 +0000 UTC m=+15.967173151" lastFinishedPulling="2025-08-13 08:42:17.461282272 +0000 UTC m=+31.656523260" observedRunningTime="2025-08-13 08:42:18.00907884 +0000 UTC m=+32.204319916" watchObservedRunningTime="2025-08-13 08:42:18.010107611 +0000 UTC m=+32.205348641" Aug 13 08:42:18.064558 systemd[1]: Created slice kubepods-besteffort-pod652c3b41_344d_48fb_966b_860b2be80bc4.slice - libcontainer container kubepods-besteffort-pod652c3b41_344d_48fb_966b_860b2be80bc4.slice. 
Aug 13 08:42:18.083029 kubelet[3075]: I0813 08:42:18.082976 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcbhj\" (UniqueName: \"kubernetes.io/projected/652c3b41-344d-48fb-966b-860b2be80bc4-kube-api-access-jcbhj\") pod \"whisker-7bcf678546-zf5vd\" (UID: \"652c3b41-344d-48fb-966b-860b2be80bc4\") " pod="calico-system/whisker-7bcf678546-zf5vd" Aug 13 08:42:18.083236 kubelet[3075]: I0813 08:42:18.083135 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/652c3b41-344d-48fb-966b-860b2be80bc4-whisker-ca-bundle\") pod \"whisker-7bcf678546-zf5vd\" (UID: \"652c3b41-344d-48fb-966b-860b2be80bc4\") " pod="calico-system/whisker-7bcf678546-zf5vd" Aug 13 08:42:18.083330 kubelet[3075]: I0813 08:42:18.083277 3075 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/652c3b41-344d-48fb-966b-860b2be80bc4-whisker-backend-key-pair\") pod \"whisker-7bcf678546-zf5vd\" (UID: \"652c3b41-344d-48fb-966b-860b2be80bc4\") " pod="calico-system/whisker-7bcf678546-zf5vd" Aug 13 08:42:18.369881 containerd[1817]: time="2025-08-13T08:42:18.369734087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bcf678546-zf5vd,Uid:652c3b41-344d-48fb-966b-860b2be80bc4,Namespace:calico-system,Attempt:0,}" Aug 13 08:42:18.429904 systemd-networkd[1619]: cali436dbef8dcf: Link UP Aug 13 08:42:18.430084 systemd-networkd[1619]: cali436dbef8dcf: Gained carrier Aug 13 08:42:18.433506 systemd[1]: run-netns-cni\x2d391e6725\x2df4e1\x2d530a\x2d846e\x2d82162ed5ddb0.mount: Deactivated successfully. Aug 13 08:42:18.433558 systemd[1]: var-lib-kubelet-pods-7945323e\x2d4111\x2d40cf\x2d841e\x2dfe297ec96877-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwbg4z.mount: Deactivated successfully. 
Aug 13 08:42:18.433598 systemd[1]: var-lib-kubelet-pods-7945323e\x2d4111\x2d40cf\x2d841e\x2dfe297ec96877-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 08:42:18.435700 containerd[1817]: 2025-08-13 08:42:18.385 [INFO][4729] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 08:42:18.435700 containerd[1817]: 2025-08-13 08:42:18.391 [INFO][4729] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--711ae8cc9f-k8s-whisker--7bcf678546--zf5vd-eth0 whisker-7bcf678546- calico-system 652c3b41-344d-48fb-966b-860b2be80bc4 900 0 2025-08-13 08:42:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7bcf678546 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.5-a-711ae8cc9f whisker-7bcf678546-zf5vd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali436dbef8dcf [] [] }} ContainerID="6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118" Namespace="calico-system" Pod="whisker-7bcf678546-zf5vd" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--7bcf678546--zf5vd-" Aug 13 08:42:18.435700 containerd[1817]: 2025-08-13 08:42:18.392 [INFO][4729] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118" Namespace="calico-system" Pod="whisker-7bcf678546-zf5vd" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--7bcf678546--zf5vd-eth0" Aug 13 08:42:18.435700 containerd[1817]: 2025-08-13 08:42:18.404 [INFO][4751] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118" HandleID="k8s-pod-network.6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118" 
Workload="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--7bcf678546--zf5vd-eth0" Aug 13 08:42:18.435700 containerd[1817]: 2025-08-13 08:42:18.405 [INFO][4751] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118" HandleID="k8s-pod-network.6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--7bcf678546--zf5vd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043bc60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-711ae8cc9f", "pod":"whisker-7bcf678546-zf5vd", "timestamp":"2025-08-13 08:42:18.404970763 +0000 UTC"}, Hostname:"ci-4081.3.5-a-711ae8cc9f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 08:42:18.435700 containerd[1817]: 2025-08-13 08:42:18.405 [INFO][4751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:18.435700 containerd[1817]: 2025-08-13 08:42:18.405 [INFO][4751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 08:42:18.435700 containerd[1817]: 2025-08-13 08:42:18.405 [INFO][4751] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-711ae8cc9f' Aug 13 08:42:18.435700 containerd[1817]: 2025-08-13 08:42:18.409 [INFO][4751] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:18.435700 containerd[1817]: 2025-08-13 08:42:18.413 [INFO][4751] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:18.435700 containerd[1817]: 2025-08-13 08:42:18.416 [INFO][4751] ipam/ipam.go 511: Trying affinity for 192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:18.435700 containerd[1817]: 2025-08-13 08:42:18.417 [INFO][4751] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:18.435700 containerd[1817]: 2025-08-13 08:42:18.419 [INFO][4751] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:18.435700 containerd[1817]: 2025-08-13 08:42:18.419 [INFO][4751] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.128/26 handle="k8s-pod-network.6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:18.435700 containerd[1817]: 2025-08-13 08:42:18.420 [INFO][4751] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118 Aug 13 08:42:18.435700 containerd[1817]: 2025-08-13 08:42:18.422 [INFO][4751] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.128/26 handle="k8s-pod-network.6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:18.435700 containerd[1817]: 2025-08-13 08:42:18.424 [INFO][4751] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.79.129/26] block=192.168.79.128/26 handle="k8s-pod-network.6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:18.435700 containerd[1817]: 2025-08-13 08:42:18.424 [INFO][4751] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.129/26] handle="k8s-pod-network.6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:18.435700 containerd[1817]: 2025-08-13 08:42:18.424 [INFO][4751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:18.435700 containerd[1817]: 2025-08-13 08:42:18.424 [INFO][4751] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.129/26] IPv6=[] ContainerID="6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118" HandleID="k8s-pod-network.6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--7bcf678546--zf5vd-eth0" Aug 13 08:42:18.436118 containerd[1817]: 2025-08-13 08:42:18.425 [INFO][4729] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118" Namespace="calico-system" Pod="whisker-7bcf678546-zf5vd" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--7bcf678546--zf5vd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-whisker--7bcf678546--zf5vd-eth0", GenerateName:"whisker-7bcf678546-", Namespace:"calico-system", SelfLink:"", UID:"652c3b41-344d-48fb-966b-860b2be80bc4", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bcf678546", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"", Pod:"whisker-7bcf678546-zf5vd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.79.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali436dbef8dcf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:18.436118 containerd[1817]: 2025-08-13 08:42:18.426 [INFO][4729] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.129/32] ContainerID="6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118" Namespace="calico-system" Pod="whisker-7bcf678546-zf5vd" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--7bcf678546--zf5vd-eth0" Aug 13 08:42:18.436118 containerd[1817]: 2025-08-13 08:42:18.426 [INFO][4729] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali436dbef8dcf ContainerID="6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118" Namespace="calico-system" Pod="whisker-7bcf678546-zf5vd" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--7bcf678546--zf5vd-eth0" Aug 13 08:42:18.436118 containerd[1817]: 2025-08-13 08:42:18.430 [INFO][4729] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118" Namespace="calico-system" Pod="whisker-7bcf678546-zf5vd" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--7bcf678546--zf5vd-eth0" Aug 13 08:42:18.436118 containerd[1817]: 2025-08-13 08:42:18.430 [INFO][4729] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118" Namespace="calico-system" Pod="whisker-7bcf678546-zf5vd" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--7bcf678546--zf5vd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-whisker--7bcf678546--zf5vd-eth0", GenerateName:"whisker-7bcf678546-", Namespace:"calico-system", SelfLink:"", UID:"652c3b41-344d-48fb-966b-860b2be80bc4", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bcf678546", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118", Pod:"whisker-7bcf678546-zf5vd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.79.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali436dbef8dcf", MAC:"8a:37:2b:62:48:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:18.436118 containerd[1817]: 2025-08-13 08:42:18.434 [INFO][4729] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118" Namespace="calico-system" Pod="whisker-7bcf678546-zf5vd" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--7bcf678546--zf5vd-eth0" Aug 13 08:42:18.444122 containerd[1817]: time="2025-08-13T08:42:18.444067405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 08:42:18.444122 containerd[1817]: time="2025-08-13T08:42:18.444114892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 08:42:18.444122 containerd[1817]: time="2025-08-13T08:42:18.444124649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:42:18.444298 containerd[1817]: time="2025-08-13T08:42:18.444163816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:42:18.466355 systemd[1]: Started cri-containerd-6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118.scope - libcontainer container 6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118. 
Aug 13 08:42:18.494280 containerd[1817]: time="2025-08-13T08:42:18.494254766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bcf678546-zf5vd,Uid:652c3b41-344d-48fb-966b-860b2be80bc4,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118\"" Aug 13 08:42:18.495123 containerd[1817]: time="2025-08-13T08:42:18.495105714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 08:42:18.759246 kernel: bpftool[4965]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 08:42:18.914864 systemd-networkd[1619]: vxlan.calico: Link UP Aug 13 08:42:18.914868 systemd-networkd[1619]: vxlan.calico: Gained carrier Aug 13 08:42:19.542488 systemd-networkd[1619]: cali436dbef8dcf: Gained IPv6LL Aug 13 08:42:19.849605 kubelet[3075]: I0813 08:42:19.849583 3075 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7945323e-4111-40cf-841e-fe297ec96877" path="/var/lib/kubelet/pods/7945323e-4111-40cf-841e-fe297ec96877/volumes" Aug 13 08:42:20.199458 containerd[1817]: time="2025-08-13T08:42:20.199372254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:20.199677 containerd[1817]: time="2025-08-13T08:42:20.199538437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 08:42:20.199932 containerd[1817]: time="2025-08-13T08:42:20.199895635Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:20.201052 containerd[1817]: time="2025-08-13T08:42:20.201011707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Aug 13 08:42:20.201469 containerd[1817]: time="2025-08-13T08:42:20.201456797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.70632842s" Aug 13 08:42:20.201496 containerd[1817]: time="2025-08-13T08:42:20.201474040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 08:42:20.202486 containerd[1817]: time="2025-08-13T08:42:20.202473361Z" level=info msg="CreateContainer within sandbox \"6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 08:42:20.206477 containerd[1817]: time="2025-08-13T08:42:20.206439279Z" level=info msg="CreateContainer within sandbox \"6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c59a909b4718551941b89070588a3c84fc6bc73656e65936952c5211f61d271d\"" Aug 13 08:42:20.206681 containerd[1817]: time="2025-08-13T08:42:20.206668097Z" level=info msg="StartContainer for \"c59a909b4718551941b89070588a3c84fc6bc73656e65936952c5211f61d271d\"" Aug 13 08:42:20.236453 systemd[1]: Started cri-containerd-c59a909b4718551941b89070588a3c84fc6bc73656e65936952c5211f61d271d.scope - libcontainer container c59a909b4718551941b89070588a3c84fc6bc73656e65936952c5211f61d271d. 
Aug 13 08:42:20.263691 containerd[1817]: time="2025-08-13T08:42:20.263664826Z" level=info msg="StartContainer for \"c59a909b4718551941b89070588a3c84fc6bc73656e65936952c5211f61d271d\" returns successfully" Aug 13 08:42:20.264319 containerd[1817]: time="2025-08-13T08:42:20.264303609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 08:42:20.630504 systemd-networkd[1619]: vxlan.calico: Gained IPv6LL Aug 13 08:42:22.659163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1708192782.mount: Deactivated successfully. Aug 13 08:42:22.663569 containerd[1817]: time="2025-08-13T08:42:22.663527006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:22.663730 containerd[1817]: time="2025-08-13T08:42:22.663644877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 08:42:22.663964 containerd[1817]: time="2025-08-13T08:42:22.663952640Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:22.665082 containerd[1817]: time="2025-08-13T08:42:22.665069634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:22.665545 containerd[1817]: time="2025-08-13T08:42:22.665524055Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" 
in 2.401197447s" Aug 13 08:42:22.665545 containerd[1817]: time="2025-08-13T08:42:22.665541126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 08:42:22.666615 containerd[1817]: time="2025-08-13T08:42:22.666572790Z" level=info msg="CreateContainer within sandbox \"6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 08:42:22.670351 containerd[1817]: time="2025-08-13T08:42:22.670308499Z" level=info msg="CreateContainer within sandbox \"6d1903a1b108b62a534f26fd144d62174b8c47e38ca634a58c3a538d56537118\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ab1b5b8ad0b79fb721cc57bb9f14a4e7a54003c750a4a82266ccde7b07143608\"" Aug 13 08:42:22.670562 containerd[1817]: time="2025-08-13T08:42:22.670549701Z" level=info msg="StartContainer for \"ab1b5b8ad0b79fb721cc57bb9f14a4e7a54003c750a4a82266ccde7b07143608\"" Aug 13 08:42:22.700470 systemd[1]: Started cri-containerd-ab1b5b8ad0b79fb721cc57bb9f14a4e7a54003c750a4a82266ccde7b07143608.scope - libcontainer container ab1b5b8ad0b79fb721cc57bb9f14a4e7a54003c750a4a82266ccde7b07143608. 
Aug 13 08:42:22.727600 containerd[1817]: time="2025-08-13T08:42:22.727576032Z" level=info msg="StartContainer for \"ab1b5b8ad0b79fb721cc57bb9f14a4e7a54003c750a4a82266ccde7b07143608\" returns successfully" Aug 13 08:42:22.848432 containerd[1817]: time="2025-08-13T08:42:22.848372733Z" level=info msg="StopPodSandbox for \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\"" Aug 13 08:42:22.924991 containerd[1817]: 2025-08-13 08:42:22.889 [INFO][5260] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Aug 13 08:42:22.924991 containerd[1817]: 2025-08-13 08:42:22.889 [INFO][5260] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" iface="eth0" netns="/var/run/netns/cni-563d0e6e-7e26-6ef7-14e8-f1667b7a3db5" Aug 13 08:42:22.924991 containerd[1817]: 2025-08-13 08:42:22.889 [INFO][5260] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" iface="eth0" netns="/var/run/netns/cni-563d0e6e-7e26-6ef7-14e8-f1667b7a3db5" Aug 13 08:42:22.924991 containerd[1817]: 2025-08-13 08:42:22.889 [INFO][5260] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" iface="eth0" netns="/var/run/netns/cni-563d0e6e-7e26-6ef7-14e8-f1667b7a3db5" Aug 13 08:42:22.924991 containerd[1817]: 2025-08-13 08:42:22.889 [INFO][5260] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Aug 13 08:42:22.924991 containerd[1817]: 2025-08-13 08:42:22.889 [INFO][5260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Aug 13 08:42:22.924991 containerd[1817]: 2025-08-13 08:42:22.912 [INFO][5278] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" HandleID="k8s-pod-network.4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0" Aug 13 08:42:22.924991 containerd[1817]: 2025-08-13 08:42:22.912 [INFO][5278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:22.924991 containerd[1817]: 2025-08-13 08:42:22.912 [INFO][5278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:22.924991 containerd[1817]: 2025-08-13 08:42:22.920 [WARNING][5278] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" HandleID="k8s-pod-network.4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0" Aug 13 08:42:22.924991 containerd[1817]: 2025-08-13 08:42:22.920 [INFO][5278] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" HandleID="k8s-pod-network.4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0" Aug 13 08:42:22.924991 containerd[1817]: 2025-08-13 08:42:22.921 [INFO][5278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:22.924991 containerd[1817]: 2025-08-13 08:42:22.923 [INFO][5260] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Aug 13 08:42:22.925668 containerd[1817]: time="2025-08-13T08:42:22.925066830Z" level=info msg="TearDown network for sandbox \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\" successfully" Aug 13 08:42:22.925668 containerd[1817]: time="2025-08-13T08:42:22.925097114Z" level=info msg="StopPodSandbox for \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\" returns successfully" Aug 13 08:42:22.925753 containerd[1817]: time="2025-08-13T08:42:22.925720013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-fddzn,Uid:bcdd4219-dd3a-4d45-91ce-fcd49d31398a,Namespace:calico-system,Attempt:1,}" Aug 13 08:42:23.012160 kubelet[3075]: I0813 08:42:23.011994 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7bcf678546-zf5vd" podStartSLOduration=0.84092746 podStartE2EDuration="5.011940538s" podCreationTimestamp="2025-08-13 08:42:18 +0000 UTC" firstStartedPulling="2025-08-13 08:42:18.494942886 +0000 UTC 
m=+32.690183874" lastFinishedPulling="2025-08-13 08:42:22.665955971 +0000 UTC m=+36.861196952" observedRunningTime="2025-08-13 08:42:23.010930694 +0000 UTC m=+37.206171789" watchObservedRunningTime="2025-08-13 08:42:23.011940538 +0000 UTC m=+37.207181580" Aug 13 08:42:23.023728 systemd-networkd[1619]: cali8fe1bfa28a1: Link UP Aug 13 08:42:23.024664 systemd-networkd[1619]: cali8fe1bfa28a1: Gained carrier Aug 13 08:42:23.045324 containerd[1817]: 2025-08-13 08:42:22.945 [INFO][5297] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0 goldmane-768f4c5c69- calico-system bcdd4219-dd3a-4d45-91ce-fcd49d31398a 924 0 2025-08-13 08:42:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.5-a-711ae8cc9f goldmane-768f4c5c69-fddzn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8fe1bfa28a1 [] [] }} ContainerID="75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3" Namespace="calico-system" Pod="goldmane-768f4c5c69-fddzn" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-" Aug 13 08:42:23.045324 containerd[1817]: 2025-08-13 08:42:22.945 [INFO][5297] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3" Namespace="calico-system" Pod="goldmane-768f4c5c69-fddzn" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0" Aug 13 08:42:23.045324 containerd[1817]: 2025-08-13 08:42:22.958 [INFO][5319] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3" 
HandleID="k8s-pod-network.75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0" Aug 13 08:42:23.045324 containerd[1817]: 2025-08-13 08:42:22.958 [INFO][5319] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3" HandleID="k8s-pod-network.75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7140), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-711ae8cc9f", "pod":"goldmane-768f4c5c69-fddzn", "timestamp":"2025-08-13 08:42:22.958395748 +0000 UTC"}, Hostname:"ci-4081.3.5-a-711ae8cc9f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 08:42:23.045324 containerd[1817]: 2025-08-13 08:42:22.958 [INFO][5319] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:23.045324 containerd[1817]: 2025-08-13 08:42:22.958 [INFO][5319] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 08:42:23.045324 containerd[1817]: 2025-08-13 08:42:22.958 [INFO][5319] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-711ae8cc9f' Aug 13 08:42:23.045324 containerd[1817]: 2025-08-13 08:42:22.963 [INFO][5319] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:23.045324 containerd[1817]: 2025-08-13 08:42:22.974 [INFO][5319] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:23.045324 containerd[1817]: 2025-08-13 08:42:22.983 [INFO][5319] ipam/ipam.go 511: Trying affinity for 192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:23.045324 containerd[1817]: 2025-08-13 08:42:22.987 [INFO][5319] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:23.045324 containerd[1817]: 2025-08-13 08:42:22.992 [INFO][5319] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:23.045324 containerd[1817]: 2025-08-13 08:42:22.992 [INFO][5319] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.128/26 handle="k8s-pod-network.75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:23.045324 containerd[1817]: 2025-08-13 08:42:22.995 [INFO][5319] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3 Aug 13 08:42:23.045324 containerd[1817]: 2025-08-13 08:42:23.002 [INFO][5319] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.128/26 handle="k8s-pod-network.75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:23.045324 containerd[1817]: 2025-08-13 08:42:23.013 [INFO][5319] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.79.130/26] block=192.168.79.128/26 handle="k8s-pod-network.75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:23.045324 containerd[1817]: 2025-08-13 08:42:23.013 [INFO][5319] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.130/26] handle="k8s-pod-network.75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:23.045324 containerd[1817]: 2025-08-13 08:42:23.013 [INFO][5319] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:23.045324 containerd[1817]: 2025-08-13 08:42:23.013 [INFO][5319] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.130/26] IPv6=[] ContainerID="75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3" HandleID="k8s-pod-network.75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0" Aug 13 08:42:23.046129 containerd[1817]: 2025-08-13 08:42:23.019 [INFO][5297] cni-plugin/k8s.go 418: Populated endpoint ContainerID="75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3" Namespace="calico-system" Pod="goldmane-768f4c5c69-fddzn" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"bcdd4219-dd3a-4d45-91ce-fcd49d31398a", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 42, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"", Pod:"goldmane-768f4c5c69-fddzn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.79.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8fe1bfa28a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:23.046129 containerd[1817]: 2025-08-13 08:42:23.019 [INFO][5297] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.130/32] ContainerID="75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3" Namespace="calico-system" Pod="goldmane-768f4c5c69-fddzn" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0" Aug 13 08:42:23.046129 containerd[1817]: 2025-08-13 08:42:23.019 [INFO][5297] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8fe1bfa28a1 ContainerID="75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3" Namespace="calico-system" Pod="goldmane-768f4c5c69-fddzn" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0" Aug 13 08:42:23.046129 containerd[1817]: 2025-08-13 08:42:23.024 [INFO][5297] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3" Namespace="calico-system" Pod="goldmane-768f4c5c69-fddzn" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0" Aug 13 08:42:23.046129 containerd[1817]: 2025-08-13 08:42:23.025 [INFO][5297] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3" Namespace="calico-system" Pod="goldmane-768f4c5c69-fddzn" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"bcdd4219-dd3a-4d45-91ce-fcd49d31398a", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 42, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3", Pod:"goldmane-768f4c5c69-fddzn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.79.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8fe1bfa28a1", MAC:"8e:18:b3:b6:23:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:23.046129 containerd[1817]: 2025-08-13 08:42:23.043 [INFO][5297] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3" Namespace="calico-system" Pod="goldmane-768f4c5c69-fddzn" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0" Aug 13 08:42:23.054559 containerd[1817]: time="2025-08-13T08:42:23.054493577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 08:42:23.054559 containerd[1817]: time="2025-08-13T08:42:23.054524467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 08:42:23.054559 containerd[1817]: time="2025-08-13T08:42:23.054534379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:42:23.054671 containerd[1817]: time="2025-08-13T08:42:23.054576072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:42:23.082459 systemd[1]: Started cri-containerd-75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3.scope - libcontainer container 75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3. Aug 13 08:42:23.105146 containerd[1817]: time="2025-08-13T08:42:23.105124800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-fddzn,Uid:bcdd4219-dd3a-4d45-91ce-fcd49d31398a,Namespace:calico-system,Attempt:1,} returns sandbox id \"75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3\"" Aug 13 08:42:23.105853 containerd[1817]: time="2025-08-13T08:42:23.105841831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 08:42:23.469440 systemd[1]: run-netns-cni\x2d563d0e6e\x2d7e26\x2d6ef7\x2d14e8\x2df1667b7a3db5.mount: Deactivated successfully. 
Aug 13 08:42:23.849238 containerd[1817]: time="2025-08-13T08:42:23.849126186Z" level=info msg="StopPodSandbox for \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\"" Aug 13 08:42:23.850120 containerd[1817]: time="2025-08-13T08:42:23.849125406Z" level=info msg="StopPodSandbox for \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\"" Aug 13 08:42:23.959377 containerd[1817]: 2025-08-13 08:42:23.914 [INFO][5411] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Aug 13 08:42:23.959377 containerd[1817]: 2025-08-13 08:42:23.914 [INFO][5411] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" iface="eth0" netns="/var/run/netns/cni-a34ca87a-02f1-4c6a-985e-c42072018246" Aug 13 08:42:23.959377 containerd[1817]: 2025-08-13 08:42:23.914 [INFO][5411] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" iface="eth0" netns="/var/run/netns/cni-a34ca87a-02f1-4c6a-985e-c42072018246" Aug 13 08:42:23.959377 containerd[1817]: 2025-08-13 08:42:23.915 [INFO][5411] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" iface="eth0" netns="/var/run/netns/cni-a34ca87a-02f1-4c6a-985e-c42072018246" Aug 13 08:42:23.959377 containerd[1817]: 2025-08-13 08:42:23.915 [INFO][5411] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Aug 13 08:42:23.959377 containerd[1817]: 2025-08-13 08:42:23.915 [INFO][5411] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Aug 13 08:42:23.959377 containerd[1817]: 2025-08-13 08:42:23.949 [INFO][5441] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" HandleID="k8s-pod-network.5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0" Aug 13 08:42:23.959377 containerd[1817]: 2025-08-13 08:42:23.949 [INFO][5441] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:23.959377 containerd[1817]: 2025-08-13 08:42:23.949 [INFO][5441] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:23.959377 containerd[1817]: 2025-08-13 08:42:23.956 [WARNING][5441] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" HandleID="k8s-pod-network.5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0" Aug 13 08:42:23.959377 containerd[1817]: 2025-08-13 08:42:23.956 [INFO][5441] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" HandleID="k8s-pod-network.5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0" Aug 13 08:42:23.959377 containerd[1817]: 2025-08-13 08:42:23.957 [INFO][5441] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:23.959377 containerd[1817]: 2025-08-13 08:42:23.958 [INFO][5411] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Aug 13 08:42:23.959796 containerd[1817]: time="2025-08-13T08:42:23.959482726Z" level=info msg="TearDown network for sandbox \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\" successfully" Aug 13 08:42:23.959796 containerd[1817]: time="2025-08-13T08:42:23.959511586Z" level=info msg="StopPodSandbox for \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\" returns successfully" Aug 13 08:42:23.960004 containerd[1817]: time="2025-08-13T08:42:23.959985967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6468cf5bfc-jkn9n,Uid:e4fc3133-4463-408d-975f-642106e09eac,Namespace:calico-system,Attempt:1,}" Aug 13 08:42:23.961326 systemd[1]: run-netns-cni\x2da34ca87a\x2d02f1\x2d4c6a\x2d985e\x2dc42072018246.mount: Deactivated successfully. 
Aug 13 08:42:23.964646 containerd[1817]: 2025-08-13 08:42:23.920 [INFO][5410] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Aug 13 08:42:23.964646 containerd[1817]: 2025-08-13 08:42:23.920 [INFO][5410] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" iface="eth0" netns="/var/run/netns/cni-969015fa-c5f7-b68a-455f-d40a1cbc9a54" Aug 13 08:42:23.964646 containerd[1817]: 2025-08-13 08:42:23.921 [INFO][5410] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" iface="eth0" netns="/var/run/netns/cni-969015fa-c5f7-b68a-455f-d40a1cbc9a54" Aug 13 08:42:23.964646 containerd[1817]: 2025-08-13 08:42:23.921 [INFO][5410] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" iface="eth0" netns="/var/run/netns/cni-969015fa-c5f7-b68a-455f-d40a1cbc9a54" Aug 13 08:42:23.964646 containerd[1817]: 2025-08-13 08:42:23.921 [INFO][5410] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Aug 13 08:42:23.964646 containerd[1817]: 2025-08-13 08:42:23.921 [INFO][5410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Aug 13 08:42:23.964646 containerd[1817]: 2025-08-13 08:42:23.951 [INFO][5447] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" HandleID="k8s-pod-network.f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0" Aug 13 08:42:23.964646 containerd[1817]: 2025-08-13 08:42:23.951 [INFO][5447] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:23.964646 containerd[1817]: 2025-08-13 08:42:23.957 [INFO][5447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:23.964646 containerd[1817]: 2025-08-13 08:42:23.962 [WARNING][5447] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" HandleID="k8s-pod-network.f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0" Aug 13 08:42:23.964646 containerd[1817]: 2025-08-13 08:42:23.962 [INFO][5447] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" HandleID="k8s-pod-network.f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0" Aug 13 08:42:23.964646 containerd[1817]: 2025-08-13 08:42:23.963 [INFO][5447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:23.964646 containerd[1817]: 2025-08-13 08:42:23.963 [INFO][5410] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Aug 13 08:42:23.964954 containerd[1817]: time="2025-08-13T08:42:23.964673662Z" level=info msg="TearDown network for sandbox \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\" successfully" Aug 13 08:42:23.964954 containerd[1817]: time="2025-08-13T08:42:23.964687309Z" level=info msg="StopPodSandbox for \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\" returns successfully" Aug 13 08:42:23.965062 containerd[1817]: time="2025-08-13T08:42:23.965050640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9rjlh,Uid:5be6b7a1-9903-4a08-a181-3b1833ebdcac,Namespace:calico-system,Attempt:1,}" Aug 13 08:42:23.967443 systemd[1]: run-netns-cni\x2d969015fa\x2dc5f7\x2db68a\x2d455f\x2dd40a1cbc9a54.mount: Deactivated successfully. Aug 13 08:42:24.010690 systemd-networkd[1619]: cali0dd7a301021: Link UP Aug 13 08:42:24.010836 systemd-networkd[1619]: cali0dd7a301021: Gained carrier Aug 13 08:42:24.016370 containerd[1817]: 2025-08-13 08:42:23.981 [INFO][5477] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0 calico-kube-controllers-6468cf5bfc- calico-system e4fc3133-4463-408d-975f-642106e09eac 938 0 2025-08-13 08:42:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6468cf5bfc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.5-a-711ae8cc9f calico-kube-controllers-6468cf5bfc-jkn9n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0dd7a301021 [] [] }} ContainerID="ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429" Namespace="calico-system" 
Pod="calico-kube-controllers-6468cf5bfc-jkn9n" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-" Aug 13 08:42:24.016370 containerd[1817]: 2025-08-13 08:42:23.981 [INFO][5477] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429" Namespace="calico-system" Pod="calico-kube-controllers-6468cf5bfc-jkn9n" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0" Aug 13 08:42:24.016370 containerd[1817]: 2025-08-13 08:42:23.994 [INFO][5503] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429" HandleID="k8s-pod-network.ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0" Aug 13 08:42:24.016370 containerd[1817]: 2025-08-13 08:42:23.994 [INFO][5503] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429" HandleID="k8s-pod-network.ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e3760), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-711ae8cc9f", "pod":"calico-kube-controllers-6468cf5bfc-jkn9n", "timestamp":"2025-08-13 08:42:23.994707036 +0000 UTC"}, Hostname:"ci-4081.3.5-a-711ae8cc9f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 08:42:24.016370 containerd[1817]: 2025-08-13 08:42:23.994 [INFO][5503] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 08:42:24.016370 containerd[1817]: 2025-08-13 08:42:23.994 [INFO][5503] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:24.016370 containerd[1817]: 2025-08-13 08:42:23.994 [INFO][5503] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-711ae8cc9f' Aug 13 08:42:24.016370 containerd[1817]: 2025-08-13 08:42:23.998 [INFO][5503] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.016370 containerd[1817]: 2025-08-13 08:42:24.000 [INFO][5503] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.016370 containerd[1817]: 2025-08-13 08:42:24.002 [INFO][5503] ipam/ipam.go 511: Trying affinity for 192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.016370 containerd[1817]: 2025-08-13 08:42:24.003 [INFO][5503] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.016370 containerd[1817]: 2025-08-13 08:42:24.003 [INFO][5503] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.016370 containerd[1817]: 2025-08-13 08:42:24.004 [INFO][5503] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.128/26 handle="k8s-pod-network.ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.016370 containerd[1817]: 2025-08-13 08:42:24.004 [INFO][5503] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429 Aug 13 08:42:24.016370 containerd[1817]: 2025-08-13 08:42:24.006 [INFO][5503] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.128/26 handle="k8s-pod-network.ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429" 
host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.016370 containerd[1817]: 2025-08-13 08:42:24.008 [INFO][5503] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.131/26] block=192.168.79.128/26 handle="k8s-pod-network.ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.016370 containerd[1817]: 2025-08-13 08:42:24.008 [INFO][5503] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.131/26] handle="k8s-pod-network.ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.016370 containerd[1817]: 2025-08-13 08:42:24.009 [INFO][5503] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:24.016370 containerd[1817]: 2025-08-13 08:42:24.009 [INFO][5503] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.131/26] IPv6=[] ContainerID="ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429" HandleID="k8s-pod-network.ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0" Aug 13 08:42:24.016825 containerd[1817]: 2025-08-13 08:42:24.009 [INFO][5477] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429" Namespace="calico-system" Pod="calico-kube-controllers-6468cf5bfc-jkn9n" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0", GenerateName:"calico-kube-controllers-6468cf5bfc-", Namespace:"calico-system", SelfLink:"", UID:"e4fc3133-4463-408d-975f-642106e09eac", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 42, 1, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6468cf5bfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"", Pod:"calico-kube-controllers-6468cf5bfc-jkn9n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0dd7a301021", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:24.016825 containerd[1817]: 2025-08-13 08:42:24.010 [INFO][5477] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.131/32] ContainerID="ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429" Namespace="calico-system" Pod="calico-kube-controllers-6468cf5bfc-jkn9n" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0" Aug 13 08:42:24.016825 containerd[1817]: 2025-08-13 08:42:24.010 [INFO][5477] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0dd7a301021 ContainerID="ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429" Namespace="calico-system" Pod="calico-kube-controllers-6468cf5bfc-jkn9n" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0" Aug 13 08:42:24.016825 containerd[1817]: 2025-08-13 08:42:24.010 [INFO][5477] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429" Namespace="calico-system" Pod="calico-kube-controllers-6468cf5bfc-jkn9n" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0" Aug 13 08:42:24.016825 containerd[1817]: 2025-08-13 08:42:24.011 [INFO][5477] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429" Namespace="calico-system" Pod="calico-kube-controllers-6468cf5bfc-jkn9n" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0", GenerateName:"calico-kube-controllers-6468cf5bfc-", Namespace:"calico-system", SelfLink:"", UID:"e4fc3133-4463-408d-975f-642106e09eac", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 42, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6468cf5bfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429", Pod:"calico-kube-controllers-6468cf5bfc-jkn9n", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0dd7a301021", MAC:"3e:70:39:32:8a:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:24.016825 containerd[1817]: 2025-08-13 08:42:24.015 [INFO][5477] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429" Namespace="calico-system" Pod="calico-kube-controllers-6468cf5bfc-jkn9n" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0" Aug 13 08:42:24.024772 containerd[1817]: time="2025-08-13T08:42:24.024546596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 08:42:24.024772 containerd[1817]: time="2025-08-13T08:42:24.024763533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 08:42:24.024772 containerd[1817]: time="2025-08-13T08:42:24.024772060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:42:24.024881 containerd[1817]: time="2025-08-13T08:42:24.024814015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:42:24.041487 systemd[1]: Started cri-containerd-ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429.scope - libcontainer container ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429. 
Aug 13 08:42:24.063328 containerd[1817]: time="2025-08-13T08:42:24.063307961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6468cf5bfc-jkn9n,Uid:e4fc3133-4463-408d-975f-642106e09eac,Namespace:calico-system,Attempt:1,} returns sandbox id \"ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429\"" Aug 13 08:42:24.159605 systemd-networkd[1619]: cali642c1e755e1: Link UP Aug 13 08:42:24.160522 systemd-networkd[1619]: cali642c1e755e1: Gained carrier Aug 13 08:42:24.180953 containerd[1817]: 2025-08-13 08:42:24.000 [INFO][5505] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0 csi-node-driver- calico-system 5be6b7a1-9903-4a08-a181-3b1833ebdcac 939 0 2025-08-13 08:42:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.5-a-711ae8cc9f csi-node-driver-9rjlh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali642c1e755e1 [] [] }} ContainerID="d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814" Namespace="calico-system" Pod="csi-node-driver-9rjlh" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-" Aug 13 08:42:24.180953 containerd[1817]: 2025-08-13 08:42:24.000 [INFO][5505] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814" Namespace="calico-system" Pod="csi-node-driver-9rjlh" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0" Aug 13 08:42:24.180953 containerd[1817]: 2025-08-13 08:42:24.012 [INFO][5545] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814" HandleID="k8s-pod-network.d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0" Aug 13 08:42:24.180953 containerd[1817]: 2025-08-13 08:42:24.013 [INFO][5545] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814" HandleID="k8s-pod-network.d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7910), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-711ae8cc9f", "pod":"csi-node-driver-9rjlh", "timestamp":"2025-08-13 08:42:24.012921435 +0000 UTC"}, Hostname:"ci-4081.3.5-a-711ae8cc9f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 08:42:24.180953 containerd[1817]: 2025-08-13 08:42:24.013 [INFO][5545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:24.180953 containerd[1817]: 2025-08-13 08:42:24.013 [INFO][5545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 08:42:24.180953 containerd[1817]: 2025-08-13 08:42:24.013 [INFO][5545] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-711ae8cc9f' Aug 13 08:42:24.180953 containerd[1817]: 2025-08-13 08:42:24.101 [INFO][5545] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.180953 containerd[1817]: 2025-08-13 08:42:24.110 [INFO][5545] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.180953 containerd[1817]: 2025-08-13 08:42:24.119 [INFO][5545] ipam/ipam.go 511: Trying affinity for 192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.180953 containerd[1817]: 2025-08-13 08:42:24.123 [INFO][5545] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.180953 containerd[1817]: 2025-08-13 08:42:24.129 [INFO][5545] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.180953 containerd[1817]: 2025-08-13 08:42:24.129 [INFO][5545] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.128/26 handle="k8s-pod-network.d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.180953 containerd[1817]: 2025-08-13 08:42:24.132 [INFO][5545] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814 Aug 13 08:42:24.180953 containerd[1817]: 2025-08-13 08:42:24.139 [INFO][5545] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.128/26 handle="k8s-pod-network.d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.180953 containerd[1817]: 2025-08-13 08:42:24.150 [INFO][5545] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.79.132/26] block=192.168.79.128/26 handle="k8s-pod-network.d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.180953 containerd[1817]: 2025-08-13 08:42:24.150 [INFO][5545] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.132/26] handle="k8s-pod-network.d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.180953 containerd[1817]: 2025-08-13 08:42:24.150 [INFO][5545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:24.180953 containerd[1817]: 2025-08-13 08:42:24.150 [INFO][5545] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.132/26] IPv6=[] ContainerID="d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814" HandleID="k8s-pod-network.d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0" Aug 13 08:42:24.181409 containerd[1817]: 2025-08-13 08:42:24.155 [INFO][5505] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814" Namespace="calico-system" Pod="csi-node-driver-9rjlh" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5be6b7a1-9903-4a08-a181-3b1833ebdcac", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 42, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"", Pod:"csi-node-driver-9rjlh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali642c1e755e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:24.181409 containerd[1817]: 2025-08-13 08:42:24.155 [INFO][5505] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.132/32] ContainerID="d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814" Namespace="calico-system" Pod="csi-node-driver-9rjlh" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0" Aug 13 08:42:24.181409 containerd[1817]: 2025-08-13 08:42:24.155 [INFO][5505] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali642c1e755e1 ContainerID="d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814" Namespace="calico-system" Pod="csi-node-driver-9rjlh" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0" Aug 13 08:42:24.181409 containerd[1817]: 2025-08-13 08:42:24.160 [INFO][5505] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814" Namespace="calico-system" Pod="csi-node-driver-9rjlh" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0" Aug 13 08:42:24.181409 
containerd[1817]: 2025-08-13 08:42:24.161 [INFO][5505] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814" Namespace="calico-system" Pod="csi-node-driver-9rjlh" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5be6b7a1-9903-4a08-a181-3b1833ebdcac", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 42, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814", Pod:"csi-node-driver-9rjlh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali642c1e755e1", MAC:"f6:1c:93:57:48:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:24.181409 containerd[1817]: 
2025-08-13 08:42:24.180 [INFO][5505] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814" Namespace="calico-system" Pod="csi-node-driver-9rjlh" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0" Aug 13 08:42:24.188950 containerd[1817]: time="2025-08-13T08:42:24.188883189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 08:42:24.188950 containerd[1817]: time="2025-08-13T08:42:24.188928273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 08:42:24.189135 containerd[1817]: time="2025-08-13T08:42:24.189122352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:42:24.189190 containerd[1817]: time="2025-08-13T08:42:24.189168490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:42:24.215706 systemd[1]: Started cri-containerd-d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814.scope - libcontainer container d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814. 
Aug 13 08:42:24.266056 containerd[1817]: time="2025-08-13T08:42:24.265927638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9rjlh,Uid:5be6b7a1-9903-4a08-a181-3b1833ebdcac,Namespace:calico-system,Attempt:1,} returns sandbox id \"d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814\"" Aug 13 08:42:24.848984 containerd[1817]: time="2025-08-13T08:42:24.848939585Z" level=info msg="StopPodSandbox for \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\"" Aug 13 08:42:24.895244 containerd[1817]: 2025-08-13 08:42:24.877 [INFO][5675] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Aug 13 08:42:24.895244 containerd[1817]: 2025-08-13 08:42:24.877 [INFO][5675] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" iface="eth0" netns="/var/run/netns/cni-01168f1a-e86a-fbb3-ca28-bf8e9092a6c0" Aug 13 08:42:24.895244 containerd[1817]: 2025-08-13 08:42:24.877 [INFO][5675] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" iface="eth0" netns="/var/run/netns/cni-01168f1a-e86a-fbb3-ca28-bf8e9092a6c0" Aug 13 08:42:24.895244 containerd[1817]: 2025-08-13 08:42:24.877 [INFO][5675] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" iface="eth0" netns="/var/run/netns/cni-01168f1a-e86a-fbb3-ca28-bf8e9092a6c0" Aug 13 08:42:24.895244 containerd[1817]: 2025-08-13 08:42:24.877 [INFO][5675] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Aug 13 08:42:24.895244 containerd[1817]: 2025-08-13 08:42:24.877 [INFO][5675] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Aug 13 08:42:24.895244 containerd[1817]: 2025-08-13 08:42:24.888 [INFO][5694] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" HandleID="k8s-pod-network.83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0" Aug 13 08:42:24.895244 containerd[1817]: 2025-08-13 08:42:24.888 [INFO][5694] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:24.895244 containerd[1817]: 2025-08-13 08:42:24.888 [INFO][5694] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:24.895244 containerd[1817]: 2025-08-13 08:42:24.892 [WARNING][5694] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" HandleID="k8s-pod-network.83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0" Aug 13 08:42:24.895244 containerd[1817]: 2025-08-13 08:42:24.892 [INFO][5694] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" HandleID="k8s-pod-network.83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0" Aug 13 08:42:24.895244 containerd[1817]: 2025-08-13 08:42:24.893 [INFO][5694] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:24.895244 containerd[1817]: 2025-08-13 08:42:24.894 [INFO][5675] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Aug 13 08:42:24.895712 containerd[1817]: time="2025-08-13T08:42:24.895317940Z" level=info msg="TearDown network for sandbox \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\" successfully" Aug 13 08:42:24.895712 containerd[1817]: time="2025-08-13T08:42:24.895335173Z" level=info msg="StopPodSandbox for \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\" returns successfully" Aug 13 08:42:24.895804 containerd[1817]: time="2025-08-13T08:42:24.895785224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-658b4bf6b-zt564,Uid:5106c9bf-90a4-4cd5-ab2d-ed67b58da6b7,Namespace:calico-apiserver,Attempt:1,}" Aug 13 08:42:24.897391 systemd[1]: run-netns-cni\x2d01168f1a\x2de86a\x2dfbb3\x2dca28\x2dbf8e9092a6c0.mount: Deactivated successfully. 
Aug 13 08:42:24.960794 systemd-networkd[1619]: calicdafc933abe: Link UP Aug 13 08:42:24.960948 systemd-networkd[1619]: calicdafc933abe: Gained carrier Aug 13 08:42:24.965946 containerd[1817]: 2025-08-13 08:42:24.918 [INFO][5705] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0 calico-apiserver-658b4bf6b- calico-apiserver 5106c9bf-90a4-4cd5-ab2d-ed67b58da6b7 952 0 2025-08-13 08:41:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:658b4bf6b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-a-711ae8cc9f calico-apiserver-658b4bf6b-zt564 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicdafc933abe [] [] }} ContainerID="159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75" Namespace="calico-apiserver" Pod="calico-apiserver-658b4bf6b-zt564" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-" Aug 13 08:42:24.965946 containerd[1817]: 2025-08-13 08:42:24.918 [INFO][5705] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75" Namespace="calico-apiserver" Pod="calico-apiserver-658b4bf6b-zt564" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0" Aug 13 08:42:24.965946 containerd[1817]: 2025-08-13 08:42:24.931 [INFO][5726] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75" HandleID="k8s-pod-network.159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0" Aug 13 08:42:24.965946 
containerd[1817]: 2025-08-13 08:42:24.931 [INFO][5726] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75" HandleID="k8s-pod-network.159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a58e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-a-711ae8cc9f", "pod":"calico-apiserver-658b4bf6b-zt564", "timestamp":"2025-08-13 08:42:24.931020692 +0000 UTC"}, Hostname:"ci-4081.3.5-a-711ae8cc9f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 08:42:24.965946 containerd[1817]: 2025-08-13 08:42:24.931 [INFO][5726] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:24.965946 containerd[1817]: 2025-08-13 08:42:24.931 [INFO][5726] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 08:42:24.965946 containerd[1817]: 2025-08-13 08:42:24.931 [INFO][5726] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-711ae8cc9f' Aug 13 08:42:24.965946 containerd[1817]: 2025-08-13 08:42:24.935 [INFO][5726] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.965946 containerd[1817]: 2025-08-13 08:42:24.938 [INFO][5726] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.965946 containerd[1817]: 2025-08-13 08:42:24.941 [INFO][5726] ipam/ipam.go 511: Trying affinity for 192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.965946 containerd[1817]: 2025-08-13 08:42:24.943 [INFO][5726] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.965946 containerd[1817]: 2025-08-13 08:42:24.944 [INFO][5726] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.965946 containerd[1817]: 2025-08-13 08:42:24.944 [INFO][5726] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.128/26 handle="k8s-pod-network.159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.965946 containerd[1817]: 2025-08-13 08:42:24.945 [INFO][5726] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75 Aug 13 08:42:24.965946 containerd[1817]: 2025-08-13 08:42:24.948 [INFO][5726] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.128/26 handle="k8s-pod-network.159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.965946 containerd[1817]: 2025-08-13 08:42:24.951 [INFO][5726] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.79.133/26] block=192.168.79.128/26 handle="k8s-pod-network.159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.965946 containerd[1817]: 2025-08-13 08:42:24.951 [INFO][5726] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.133/26] handle="k8s-pod-network.159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:24.965946 containerd[1817]: 2025-08-13 08:42:24.951 [INFO][5726] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:24.965946 containerd[1817]: 2025-08-13 08:42:24.951 [INFO][5726] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.133/26] IPv6=[] ContainerID="159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75" HandleID="k8s-pod-network.159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0" Aug 13 08:42:24.966379 containerd[1817]: 2025-08-13 08:42:24.955 [INFO][5705] cni-plugin/k8s.go 418: Populated endpoint ContainerID="159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75" Namespace="calico-apiserver" Pod="calico-apiserver-658b4bf6b-zt564" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0", GenerateName:"calico-apiserver-658b4bf6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"5106c9bf-90a4-4cd5-ab2d-ed67b58da6b7", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 41, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"658b4bf6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"", Pod:"calico-apiserver-658b4bf6b-zt564", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicdafc933abe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:24.966379 containerd[1817]: 2025-08-13 08:42:24.955 [INFO][5705] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.133/32] ContainerID="159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75" Namespace="calico-apiserver" Pod="calico-apiserver-658b4bf6b-zt564" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0" Aug 13 08:42:24.966379 containerd[1817]: 2025-08-13 08:42:24.955 [INFO][5705] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicdafc933abe ContainerID="159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75" Namespace="calico-apiserver" Pod="calico-apiserver-658b4bf6b-zt564" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0" Aug 13 08:42:24.966379 containerd[1817]: 2025-08-13 08:42:24.957 [INFO][5705] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75" Namespace="calico-apiserver" Pod="calico-apiserver-658b4bf6b-zt564" 
WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0" Aug 13 08:42:24.966379 containerd[1817]: 2025-08-13 08:42:24.957 [INFO][5705] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75" Namespace="calico-apiserver" Pod="calico-apiserver-658b4bf6b-zt564" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0", GenerateName:"calico-apiserver-658b4bf6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"5106c9bf-90a4-4cd5-ab2d-ed67b58da6b7", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 41, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"658b4bf6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75", Pod:"calico-apiserver-658b4bf6b-zt564", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicdafc933abe", MAC:"ee:9a:d5:f3:fb:1f", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:24.966379 containerd[1817]: 2025-08-13 08:42:24.964 [INFO][5705] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75" Namespace="calico-apiserver" Pod="calico-apiserver-658b4bf6b-zt564" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0" Aug 13 08:42:24.974233 containerd[1817]: time="2025-08-13T08:42:24.974191365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 08:42:24.974233 containerd[1817]: time="2025-08-13T08:42:24.974222695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 08:42:24.974233 containerd[1817]: time="2025-08-13T08:42:24.974229913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:42:24.974325 containerd[1817]: time="2025-08-13T08:42:24.974271573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:42:24.982264 systemd-networkd[1619]: cali8fe1bfa28a1: Gained IPv6LL Aug 13 08:42:24.987559 systemd[1]: Started cri-containerd-159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75.scope - libcontainer container 159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75. 
Aug 13 08:42:25.010038 containerd[1817]: time="2025-08-13T08:42:25.010017167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-658b4bf6b-zt564,Uid:5106c9bf-90a4-4cd5-ab2d-ed67b58da6b7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75\"" Aug 13 08:42:25.174337 systemd-networkd[1619]: cali0dd7a301021: Gained IPv6LL Aug 13 08:42:25.567177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1934474244.mount: Deactivated successfully. Aug 13 08:42:25.765741 containerd[1817]: time="2025-08-13T08:42:25.765692560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:25.765978 containerd[1817]: time="2025-08-13T08:42:25.765927594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 08:42:25.766253 containerd[1817]: time="2025-08-13T08:42:25.766187346Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:25.767410 containerd[1817]: time="2025-08-13T08:42:25.767369165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:25.767877 containerd[1817]: time="2025-08-13T08:42:25.767866779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 2.662008384s" Aug 13 08:42:25.767898 containerd[1817]: 
time="2025-08-13T08:42:25.767881703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 08:42:25.768405 containerd[1817]: time="2025-08-13T08:42:25.768392482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 08:42:25.768950 containerd[1817]: time="2025-08-13T08:42:25.768936189Z" level=info msg="CreateContainer within sandbox \"75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 08:42:25.773708 containerd[1817]: time="2025-08-13T08:42:25.773687020Z" level=info msg="CreateContainer within sandbox \"75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"4b000f9e49f31d099871f15c2022f1ccfac32c98715610ef0a6ba4a6c13a2c57\"" Aug 13 08:42:25.773939 containerd[1817]: time="2025-08-13T08:42:25.773900144Z" level=info msg="StartContainer for \"4b000f9e49f31d099871f15c2022f1ccfac32c98715610ef0a6ba4a6c13a2c57\"" Aug 13 08:42:25.796393 systemd[1]: Started cri-containerd-4b000f9e49f31d099871f15c2022f1ccfac32c98715610ef0a6ba4a6c13a2c57.scope - libcontainer container 4b000f9e49f31d099871f15c2022f1ccfac32c98715610ef0a6ba4a6c13a2c57. 
Aug 13 08:42:25.815269 systemd-networkd[1619]: cali642c1e755e1: Gained IPv6LL Aug 13 08:42:25.821092 containerd[1817]: time="2025-08-13T08:42:25.821036997Z" level=info msg="StartContainer for \"4b000f9e49f31d099871f15c2022f1ccfac32c98715610ef0a6ba4a6c13a2c57\" returns successfully" Aug 13 08:42:26.020146 kubelet[3075]: I0813 08:42:26.020057 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-fddzn" podStartSLOduration=22.357405168 podStartE2EDuration="25.020028271s" podCreationTimestamp="2025-08-13 08:42:01 +0000 UTC" firstStartedPulling="2025-08-13 08:42:23.105717253 +0000 UTC m=+37.300958238" lastFinishedPulling="2025-08-13 08:42:25.768340357 +0000 UTC m=+39.963581341" observedRunningTime="2025-08-13 08:42:26.019565008 +0000 UTC m=+40.214806049" watchObservedRunningTime="2025-08-13 08:42:26.020028271 +0000 UTC m=+40.215269285" Aug 13 08:42:26.390423 systemd-networkd[1619]: calicdafc933abe: Gained IPv6LL Aug 13 08:42:26.849407 containerd[1817]: time="2025-08-13T08:42:26.849330920Z" level=info msg="StopPodSandbox for \"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\"" Aug 13 08:42:26.850237 containerd[1817]: time="2025-08-13T08:42:26.849332756Z" level=info msg="StopPodSandbox for \"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\"" Aug 13 08:42:26.850237 containerd[1817]: time="2025-08-13T08:42:26.849339871Z" level=info msg="StopPodSandbox for \"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\"" Aug 13 08:42:26.941416 containerd[1817]: 2025-08-13 08:42:26.912 [INFO][5890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Aug 13 08:42:26.941416 containerd[1817]: 2025-08-13 08:42:26.913 [INFO][5890] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" iface="eth0" netns="/var/run/netns/cni-106eb31a-3df3-6582-7299-aad6461fceb2" Aug 13 08:42:26.941416 containerd[1817]: 2025-08-13 08:42:26.913 [INFO][5890] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" iface="eth0" netns="/var/run/netns/cni-106eb31a-3df3-6582-7299-aad6461fceb2" Aug 13 08:42:26.941416 containerd[1817]: 2025-08-13 08:42:26.914 [INFO][5890] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" iface="eth0" netns="/var/run/netns/cni-106eb31a-3df3-6582-7299-aad6461fceb2" Aug 13 08:42:26.941416 containerd[1817]: 2025-08-13 08:42:26.914 [INFO][5890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Aug 13 08:42:26.941416 containerd[1817]: 2025-08-13 08:42:26.914 [INFO][5890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Aug 13 08:42:26.941416 containerd[1817]: 2025-08-13 08:42:26.935 [INFO][5936] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" HandleID="k8s-pod-network.a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0" Aug 13 08:42:26.941416 containerd[1817]: 2025-08-13 08:42:26.935 [INFO][5936] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:26.941416 containerd[1817]: 2025-08-13 08:42:26.935 [INFO][5936] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 08:42:26.941416 containerd[1817]: 2025-08-13 08:42:26.938 [WARNING][5936] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" HandleID="k8s-pod-network.a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0" Aug 13 08:42:26.941416 containerd[1817]: 2025-08-13 08:42:26.938 [INFO][5936] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" HandleID="k8s-pod-network.a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0" Aug 13 08:42:26.941416 containerd[1817]: 2025-08-13 08:42:26.939 [INFO][5936] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:26.941416 containerd[1817]: 2025-08-13 08:42:26.940 [INFO][5890] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Aug 13 08:42:26.941691 containerd[1817]: time="2025-08-13T08:42:26.941471434Z" level=info msg="TearDown network for sandbox \"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\" successfully" Aug 13 08:42:26.941691 containerd[1817]: time="2025-08-13T08:42:26.941494436Z" level=info msg="StopPodSandbox for \"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\" returns successfully" Aug 13 08:42:26.941888 containerd[1817]: time="2025-08-13T08:42:26.941876003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bqv6d,Uid:1b313c34-ec50-4ab0-8732-819433f220ba,Namespace:kube-system,Attempt:1,}" Aug 13 08:42:26.943177 systemd[1]: run-netns-cni\x2d106eb31a\x2d3df3\x2d6582\x2d7299\x2daad6461fceb2.mount: Deactivated successfully. 
Aug 13 08:42:26.945310 containerd[1817]: 2025-08-13 08:42:26.913 [INFO][5891] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Aug 13 08:42:26.945310 containerd[1817]: 2025-08-13 08:42:26.913 [INFO][5891] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" iface="eth0" netns="/var/run/netns/cni-275a11e8-2dd6-a825-3cef-2ab4ff00b0c2" Aug 13 08:42:26.945310 containerd[1817]: 2025-08-13 08:42:26.914 [INFO][5891] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" iface="eth0" netns="/var/run/netns/cni-275a11e8-2dd6-a825-3cef-2ab4ff00b0c2" Aug 13 08:42:26.945310 containerd[1817]: 2025-08-13 08:42:26.914 [INFO][5891] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" iface="eth0" netns="/var/run/netns/cni-275a11e8-2dd6-a825-3cef-2ab4ff00b0c2" Aug 13 08:42:26.945310 containerd[1817]: 2025-08-13 08:42:26.914 [INFO][5891] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Aug 13 08:42:26.945310 containerd[1817]: 2025-08-13 08:42:26.915 [INFO][5891] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Aug 13 08:42:26.945310 containerd[1817]: 2025-08-13 08:42:26.935 [INFO][5940] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" HandleID="k8s-pod-network.673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0" Aug 13 08:42:26.945310 containerd[1817]: 2025-08-13 08:42:26.935 
[INFO][5940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:26.945310 containerd[1817]: 2025-08-13 08:42:26.939 [INFO][5940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:26.945310 containerd[1817]: 2025-08-13 08:42:26.942 [WARNING][5940] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" HandleID="k8s-pod-network.673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0" Aug 13 08:42:26.945310 containerd[1817]: 2025-08-13 08:42:26.943 [INFO][5940] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" HandleID="k8s-pod-network.673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0" Aug 13 08:42:26.945310 containerd[1817]: 2025-08-13 08:42:26.943 [INFO][5940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:26.945310 containerd[1817]: 2025-08-13 08:42:26.944 [INFO][5891] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Aug 13 08:42:26.945723 containerd[1817]: time="2025-08-13T08:42:26.945386369Z" level=info msg="TearDown network for sandbox \"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\" successfully" Aug 13 08:42:26.945723 containerd[1817]: time="2025-08-13T08:42:26.945407569Z" level=info msg="StopPodSandbox for \"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\" returns successfully" Aug 13 08:42:26.945781 containerd[1817]: time="2025-08-13T08:42:26.945746320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-658b4bf6b-bvb5g,Uid:1d73c1f7-7407-44b3-82c7-2812b426db1e,Namespace:calico-apiserver,Attempt:1,}" Aug 13 08:42:26.949427 systemd[1]: run-netns-cni\x2d275a11e8\x2d2dd6\x2da825\x2d3cef\x2d2ab4ff00b0c2.mount: Deactivated successfully. Aug 13 08:42:26.950082 containerd[1817]: 2025-08-13 08:42:26.912 [INFO][5892] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Aug 13 08:42:26.950082 containerd[1817]: 2025-08-13 08:42:26.913 [INFO][5892] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" iface="eth0" netns="/var/run/netns/cni-9ce54086-48df-6a49-5f89-3aaa669675be" Aug 13 08:42:26.950082 containerd[1817]: 2025-08-13 08:42:26.913 [INFO][5892] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" iface="eth0" netns="/var/run/netns/cni-9ce54086-48df-6a49-5f89-3aaa669675be" Aug 13 08:42:26.950082 containerd[1817]: 2025-08-13 08:42:26.914 [INFO][5892] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" iface="eth0" netns="/var/run/netns/cni-9ce54086-48df-6a49-5f89-3aaa669675be" Aug 13 08:42:26.950082 containerd[1817]: 2025-08-13 08:42:26.914 [INFO][5892] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Aug 13 08:42:26.950082 containerd[1817]: 2025-08-13 08:42:26.914 [INFO][5892] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Aug 13 08:42:26.950082 containerd[1817]: 2025-08-13 08:42:26.935 [INFO][5938] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" HandleID="k8s-pod-network.e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0" Aug 13 08:42:26.950082 containerd[1817]: 2025-08-13 08:42:26.936 [INFO][5938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:26.950082 containerd[1817]: 2025-08-13 08:42:26.943 [INFO][5938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:26.950082 containerd[1817]: 2025-08-13 08:42:26.947 [WARNING][5938] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" HandleID="k8s-pod-network.e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0" Aug 13 08:42:26.950082 containerd[1817]: 2025-08-13 08:42:26.947 [INFO][5938] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" HandleID="k8s-pod-network.e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0" Aug 13 08:42:26.950082 containerd[1817]: 2025-08-13 08:42:26.948 [INFO][5938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:26.950082 containerd[1817]: 2025-08-13 08:42:26.949 [INFO][5892] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Aug 13 08:42:26.950373 containerd[1817]: time="2025-08-13T08:42:26.950167743Z" level=info msg="TearDown network for sandbox \"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\" successfully" Aug 13 08:42:26.950373 containerd[1817]: time="2025-08-13T08:42:26.950192003Z" level=info msg="StopPodSandbox for \"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\" returns successfully" Aug 13 08:42:26.950604 containerd[1817]: time="2025-08-13T08:42:26.950566163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wdzjv,Uid:9ca82ef8-ce0c-4a00-987e-b04a91763a60,Namespace:kube-system,Attempt:1,}" Aug 13 08:42:26.953105 systemd[1]: run-netns-cni\x2d9ce54086\x2d48df\x2d6a49\x2d5f89\x2d3aaa669675be.mount: Deactivated successfully. 
Aug 13 08:42:26.994823 systemd-networkd[1619]: cali96b62aa8bfa: Link UP Aug 13 08:42:26.994955 systemd-networkd[1619]: cali96b62aa8bfa: Gained carrier Aug 13 08:42:27.001140 containerd[1817]: 2025-08-13 08:42:26.964 [INFO][5988] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0 coredns-668d6bf9bc- kube-system 1b313c34-ec50-4ab0-8732-819433f220ba 974 0 2025-08-13 08:41:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-a-711ae8cc9f coredns-668d6bf9bc-bqv6d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali96b62aa8bfa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77" Namespace="kube-system" Pod="coredns-668d6bf9bc-bqv6d" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-" Aug 13 08:42:27.001140 containerd[1817]: 2025-08-13 08:42:26.964 [INFO][5988] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77" Namespace="kube-system" Pod="coredns-668d6bf9bc-bqv6d" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0" Aug 13 08:42:27.001140 containerd[1817]: 2025-08-13 08:42:26.977 [INFO][6054] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77" HandleID="k8s-pod-network.08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0" Aug 13 08:42:27.001140 containerd[1817]: 2025-08-13 08:42:26.977 [INFO][6054] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77" HandleID="k8s-pod-network.08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7860), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-a-711ae8cc9f", "pod":"coredns-668d6bf9bc-bqv6d", "timestamp":"2025-08-13 08:42:26.977157824 +0000 UTC"}, Hostname:"ci-4081.3.5-a-711ae8cc9f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 08:42:27.001140 containerd[1817]: 2025-08-13 08:42:26.977 [INFO][6054] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:27.001140 containerd[1817]: 2025-08-13 08:42:26.977 [INFO][6054] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 08:42:27.001140 containerd[1817]: 2025-08-13 08:42:26.977 [INFO][6054] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-711ae8cc9f' Aug 13 08:42:27.001140 containerd[1817]: 2025-08-13 08:42:26.981 [INFO][6054] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.001140 containerd[1817]: 2025-08-13 08:42:26.983 [INFO][6054] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.001140 containerd[1817]: 2025-08-13 08:42:26.985 [INFO][6054] ipam/ipam.go 511: Trying affinity for 192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.001140 containerd[1817]: 2025-08-13 08:42:26.986 [INFO][6054] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.001140 containerd[1817]: 2025-08-13 08:42:26.987 [INFO][6054] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.001140 containerd[1817]: 2025-08-13 08:42:26.987 [INFO][6054] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.128/26 handle="k8s-pod-network.08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.001140 containerd[1817]: 2025-08-13 08:42:26.988 [INFO][6054] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77 Aug 13 08:42:27.001140 containerd[1817]: 2025-08-13 08:42:26.989 [INFO][6054] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.128/26 handle="k8s-pod-network.08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.001140 containerd[1817]: 2025-08-13 08:42:26.992 [INFO][6054] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.79.134/26] block=192.168.79.128/26 handle="k8s-pod-network.08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.001140 containerd[1817]: 2025-08-13 08:42:26.992 [INFO][6054] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.134/26] handle="k8s-pod-network.08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.001140 containerd[1817]: 2025-08-13 08:42:26.992 [INFO][6054] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:27.001140 containerd[1817]: 2025-08-13 08:42:26.992 [INFO][6054] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.134/26] IPv6=[] ContainerID="08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77" HandleID="k8s-pod-network.08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0" Aug 13 08:42:27.001528 containerd[1817]: 2025-08-13 08:42:26.993 [INFO][5988] cni-plugin/k8s.go 418: Populated endpoint ContainerID="08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77" Namespace="kube-system" Pod="coredns-668d6bf9bc-bqv6d" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1b313c34-ec50-4ab0-8732-819433f220ba", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 41, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"", Pod:"coredns-668d6bf9bc-bqv6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96b62aa8bfa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:27.001528 containerd[1817]: 2025-08-13 08:42:26.994 [INFO][5988] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.134/32] ContainerID="08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77" Namespace="kube-system" Pod="coredns-668d6bf9bc-bqv6d" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0" Aug 13 08:42:27.001528 containerd[1817]: 2025-08-13 08:42:26.994 [INFO][5988] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96b62aa8bfa ContainerID="08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77" Namespace="kube-system" Pod="coredns-668d6bf9bc-bqv6d" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0" Aug 13 08:42:27.001528 containerd[1817]: 2025-08-13 08:42:26.995 [INFO][5988] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77" Namespace="kube-system" Pod="coredns-668d6bf9bc-bqv6d" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0" Aug 13 08:42:27.001528 containerd[1817]: 2025-08-13 08:42:26.995 [INFO][5988] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77" Namespace="kube-system" Pod="coredns-668d6bf9bc-bqv6d" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1b313c34-ec50-4ab0-8732-819433f220ba", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 41, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77", Pod:"coredns-668d6bf9bc-bqv6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96b62aa8bfa", MAC:"e2:11:7d:b8:4e:a8", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:27.001528 containerd[1817]: 2025-08-13 08:42:27.000 [INFO][5988] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77" Namespace="kube-system" Pod="coredns-668d6bf9bc-bqv6d" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0" Aug 13 08:42:27.004445 kubelet[3075]: I0813 08:42:27.004430 3075 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 08:42:27.009418 containerd[1817]: time="2025-08-13T08:42:27.009324610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 08:42:27.009418 containerd[1817]: time="2025-08-13T08:42:27.009355566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 08:42:27.009418 containerd[1817]: time="2025-08-13T08:42:27.009363224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:42:27.009529 containerd[1817]: time="2025-08-13T08:42:27.009402057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:42:27.031757 systemd[1]: Started cri-containerd-08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77.scope - libcontainer container 08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77. Aug 13 08:42:27.106732 systemd-networkd[1619]: calif43ebe970f1: Link UP Aug 13 08:42:27.107016 systemd-networkd[1619]: calif43ebe970f1: Gained carrier Aug 13 08:42:27.108544 containerd[1817]: time="2025-08-13T08:42:27.108516525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bqv6d,Uid:1b313c34-ec50-4ab0-8732-819433f220ba,Namespace:kube-system,Attempt:1,} returns sandbox id \"08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77\"" Aug 13 08:42:27.110042 containerd[1817]: time="2025-08-13T08:42:27.110024446Z" level=info msg="CreateContainer within sandbox \"08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 08:42:27.114141 containerd[1817]: 2025-08-13 08:42:26.967 [INFO][5998] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0 calico-apiserver-658b4bf6b- calico-apiserver 1d73c1f7-7407-44b3-82c7-2812b426db1e 975 0 2025-08-13 08:41:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:658b4bf6b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-a-711ae8cc9f calico-apiserver-658b4bf6b-bvb5g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif43ebe970f1 [] [] }} ContainerID="827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a" Namespace="calico-apiserver" Pod="calico-apiserver-658b4bf6b-bvb5g" 
WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-" Aug 13 08:42:27.114141 containerd[1817]: 2025-08-13 08:42:26.967 [INFO][5998] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a" Namespace="calico-apiserver" Pod="calico-apiserver-658b4bf6b-bvb5g" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0" Aug 13 08:42:27.114141 containerd[1817]: 2025-08-13 08:42:26.980 [INFO][6060] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a" HandleID="k8s-pod-network.827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0" Aug 13 08:42:27.114141 containerd[1817]: 2025-08-13 08:42:26.980 [INFO][6060] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a" HandleID="k8s-pod-network.827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000345a80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-a-711ae8cc9f", "pod":"calico-apiserver-658b4bf6b-bvb5g", "timestamp":"2025-08-13 08:42:26.98042699 +0000 UTC"}, Hostname:"ci-4081.3.5-a-711ae8cc9f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 08:42:27.114141 containerd[1817]: 2025-08-13 08:42:26.980 [INFO][6060] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 08:42:27.114141 containerd[1817]: 2025-08-13 08:42:26.992 [INFO][6060] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:27.114141 containerd[1817]: 2025-08-13 08:42:26.993 [INFO][6060] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-711ae8cc9f' Aug 13 08:42:27.114141 containerd[1817]: 2025-08-13 08:42:27.082 [INFO][6060] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.114141 containerd[1817]: 2025-08-13 08:42:27.087 [INFO][6060] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.114141 containerd[1817]: 2025-08-13 08:42:27.091 [INFO][6060] ipam/ipam.go 511: Trying affinity for 192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.114141 containerd[1817]: 2025-08-13 08:42:27.093 [INFO][6060] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.114141 containerd[1817]: 2025-08-13 08:42:27.096 [INFO][6060] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.114141 containerd[1817]: 2025-08-13 08:42:27.096 [INFO][6060] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.128/26 handle="k8s-pod-network.827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.114141 containerd[1817]: 2025-08-13 08:42:27.097 [INFO][6060] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a Aug 13 08:42:27.114141 containerd[1817]: 2025-08-13 08:42:27.100 [INFO][6060] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.128/26 handle="k8s-pod-network.827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a" 
host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.114141 containerd[1817]: 2025-08-13 08:42:27.104 [INFO][6060] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.135/26] block=192.168.79.128/26 handle="k8s-pod-network.827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.114141 containerd[1817]: 2025-08-13 08:42:27.104 [INFO][6060] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.135/26] handle="k8s-pod-network.827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.114141 containerd[1817]: 2025-08-13 08:42:27.104 [INFO][6060] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:27.114141 containerd[1817]: 2025-08-13 08:42:27.104 [INFO][6060] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.135/26] IPv6=[] ContainerID="827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a" HandleID="k8s-pod-network.827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0" Aug 13 08:42:27.114532 containerd[1817]: 2025-08-13 08:42:27.105 [INFO][5998] cni-plugin/k8s.go 418: Populated endpoint ContainerID="827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a" Namespace="calico-apiserver" Pod="calico-apiserver-658b4bf6b-bvb5g" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0", GenerateName:"calico-apiserver-658b4bf6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d73c1f7-7407-44b3-82c7-2812b426db1e", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 41, 59, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"658b4bf6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"", Pod:"calico-apiserver-658b4bf6b-bvb5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif43ebe970f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:27.114532 containerd[1817]: 2025-08-13 08:42:27.105 [INFO][5998] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.135/32] ContainerID="827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a" Namespace="calico-apiserver" Pod="calico-apiserver-658b4bf6b-bvb5g" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0" Aug 13 08:42:27.114532 containerd[1817]: 2025-08-13 08:42:27.105 [INFO][5998] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif43ebe970f1 ContainerID="827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a" Namespace="calico-apiserver" Pod="calico-apiserver-658b4bf6b-bvb5g" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0" Aug 13 08:42:27.114532 containerd[1817]: 2025-08-13 08:42:27.107 [INFO][5998] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a" Namespace="calico-apiserver" Pod="calico-apiserver-658b4bf6b-bvb5g" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0" Aug 13 08:42:27.114532 containerd[1817]: 2025-08-13 08:42:27.107 [INFO][5998] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a" Namespace="calico-apiserver" Pod="calico-apiserver-658b4bf6b-bvb5g" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0", GenerateName:"calico-apiserver-658b4bf6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d73c1f7-7407-44b3-82c7-2812b426db1e", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 41, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"658b4bf6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a", Pod:"calico-apiserver-658b4bf6b-bvb5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif43ebe970f1", MAC:"e6:36:0a:06:ec:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:27.114532 containerd[1817]: 2025-08-13 08:42:27.113 [INFO][5998] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a" Namespace="calico-apiserver" Pod="calico-apiserver-658b4bf6b-bvb5g" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0" Aug 13 08:42:27.115020 containerd[1817]: time="2025-08-13T08:42:27.115003536Z" level=info msg="CreateContainer within sandbox \"08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c7db71ba3cf23eaf9dbf1826a6bf8250b391c2b2df6aaa5cc99ed99b9531acf\"" Aug 13 08:42:27.115303 containerd[1817]: time="2025-08-13T08:42:27.115290631Z" level=info msg="StartContainer for \"7c7db71ba3cf23eaf9dbf1826a6bf8250b391c2b2df6aaa5cc99ed99b9531acf\"" Aug 13 08:42:27.122360 containerd[1817]: time="2025-08-13T08:42:27.122303750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 08:42:27.122360 containerd[1817]: time="2025-08-13T08:42:27.122336404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 08:42:27.122360 containerd[1817]: time="2025-08-13T08:42:27.122346473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:42:27.122537 containerd[1817]: time="2025-08-13T08:42:27.122413203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:42:27.143549 systemd[1]: Started cri-containerd-7c7db71ba3cf23eaf9dbf1826a6bf8250b391c2b2df6aaa5cc99ed99b9531acf.scope - libcontainer container 7c7db71ba3cf23eaf9dbf1826a6bf8250b391c2b2df6aaa5cc99ed99b9531acf. Aug 13 08:42:27.145254 systemd[1]: Started cri-containerd-827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a.scope - libcontainer container 827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a. Aug 13 08:42:27.155678 containerd[1817]: time="2025-08-13T08:42:27.155655857Z" level=info msg="StartContainer for \"7c7db71ba3cf23eaf9dbf1826a6bf8250b391c2b2df6aaa5cc99ed99b9531acf\" returns successfully" Aug 13 08:42:27.169862 containerd[1817]: time="2025-08-13T08:42:27.169838921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-658b4bf6b-bvb5g,Uid:1d73c1f7-7407-44b3-82c7-2812b426db1e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a\"" Aug 13 08:42:27.211606 systemd-networkd[1619]: cali55fed4ec012: Link UP Aug 13 08:42:27.212498 systemd-networkd[1619]: cali55fed4ec012: Gained carrier Aug 13 08:42:27.230030 containerd[1817]: 2025-08-13 08:42:26.972 [INFO][6021] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0 coredns-668d6bf9bc- kube-system 9ca82ef8-ce0c-4a00-987e-b04a91763a60 973 0 2025-08-13 08:41:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-a-711ae8cc9f coredns-668d6bf9bc-wdzjv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali55fed4ec012 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86" Namespace="kube-system" Pod="coredns-668d6bf9bc-wdzjv" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-" Aug 13 08:42:27.230030 containerd[1817]: 2025-08-13 08:42:26.972 [INFO][6021] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86" Namespace="kube-system" Pod="coredns-668d6bf9bc-wdzjv" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0" Aug 13 08:42:27.230030 containerd[1817]: 2025-08-13 08:42:26.984 [INFO][6073] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86" HandleID="k8s-pod-network.f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0" Aug 13 08:42:27.230030 containerd[1817]: 2025-08-13 08:42:26.984 [INFO][6073] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86" HandleID="k8s-pod-network.f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f760), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-a-711ae8cc9f", "pod":"coredns-668d6bf9bc-wdzjv", "timestamp":"2025-08-13 08:42:26.984436819 +0000 UTC"}, Hostname:"ci-4081.3.5-a-711ae8cc9f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 08:42:27.230030 containerd[1817]: 2025-08-13 08:42:26.984 [INFO][6073] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 08:42:27.230030 containerd[1817]: 2025-08-13 08:42:27.104 [INFO][6073] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:27.230030 containerd[1817]: 2025-08-13 08:42:27.104 [INFO][6073] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-711ae8cc9f' Aug 13 08:42:27.230030 containerd[1817]: 2025-08-13 08:42:27.182 [INFO][6073] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.230030 containerd[1817]: 2025-08-13 08:42:27.187 [INFO][6073] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.230030 containerd[1817]: 2025-08-13 08:42:27.190 [INFO][6073] ipam/ipam.go 511: Trying affinity for 192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.230030 containerd[1817]: 2025-08-13 08:42:27.191 [INFO][6073] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.230030 containerd[1817]: 2025-08-13 08:42:27.192 [INFO][6073] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.128/26 host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.230030 containerd[1817]: 2025-08-13 08:42:27.192 [INFO][6073] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.128/26 handle="k8s-pod-network.f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.230030 containerd[1817]: 2025-08-13 08:42:27.193 [INFO][6073] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86 Aug 13 08:42:27.230030 containerd[1817]: 2025-08-13 08:42:27.195 [INFO][6073] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.128/26 handle="k8s-pod-network.f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86" 
host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.230030 containerd[1817]: 2025-08-13 08:42:27.201 [INFO][6073] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.136/26] block=192.168.79.128/26 handle="k8s-pod-network.f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.230030 containerd[1817]: 2025-08-13 08:42:27.201 [INFO][6073] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.136/26] handle="k8s-pod-network.f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86" host="ci-4081.3.5-a-711ae8cc9f" Aug 13 08:42:27.230030 containerd[1817]: 2025-08-13 08:42:27.201 [INFO][6073] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:27.230030 containerd[1817]: 2025-08-13 08:42:27.201 [INFO][6073] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.136/26] IPv6=[] ContainerID="f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86" HandleID="k8s-pod-network.f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0" Aug 13 08:42:27.230545 containerd[1817]: 2025-08-13 08:42:27.206 [INFO][6021] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86" Namespace="kube-system" Pod="coredns-668d6bf9bc-wdzjv" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9ca82ef8-ce0c-4a00-987e-b04a91763a60", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 41, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"", Pod:"coredns-668d6bf9bc-wdzjv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55fed4ec012", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:27.230545 containerd[1817]: 2025-08-13 08:42:27.206 [INFO][6021] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.136/32] ContainerID="f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86" Namespace="kube-system" Pod="coredns-668d6bf9bc-wdzjv" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0" Aug 13 08:42:27.230545 containerd[1817]: 2025-08-13 08:42:27.206 [INFO][6021] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55fed4ec012 ContainerID="f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86" Namespace="kube-system" Pod="coredns-668d6bf9bc-wdzjv" 
WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0" Aug 13 08:42:27.230545 containerd[1817]: 2025-08-13 08:42:27.214 [INFO][6021] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86" Namespace="kube-system" Pod="coredns-668d6bf9bc-wdzjv" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0" Aug 13 08:42:27.230545 containerd[1817]: 2025-08-13 08:42:27.214 [INFO][6021] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86" Namespace="kube-system" Pod="coredns-668d6bf9bc-wdzjv" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9ca82ef8-ce0c-4a00-987e-b04a91763a60", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 41, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86", Pod:"coredns-668d6bf9bc-wdzjv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.136/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55fed4ec012", MAC:"02:58:0b:79:8d:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:27.230545 containerd[1817]: 2025-08-13 08:42:27.228 [INFO][6021] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86" Namespace="kube-system" Pod="coredns-668d6bf9bc-wdzjv" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0" Aug 13 08:42:27.238829 containerd[1817]: time="2025-08-13T08:42:27.238605188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 08:42:27.238829 containerd[1817]: time="2025-08-13T08:42:27.238815238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 08:42:27.238829 containerd[1817]: time="2025-08-13T08:42:27.238823180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:42:27.238928 containerd[1817]: time="2025-08-13T08:42:27.238864892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 08:42:27.267685 systemd[1]: Started cri-containerd-f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86.scope - libcontainer container f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86. Aug 13 08:42:27.361205 containerd[1817]: time="2025-08-13T08:42:27.361118865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wdzjv,Uid:9ca82ef8-ce0c-4a00-987e-b04a91763a60,Namespace:kube-system,Attempt:1,} returns sandbox id \"f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86\"" Aug 13 08:42:27.362957 containerd[1817]: time="2025-08-13T08:42:27.362936115Z" level=info msg="CreateContainer within sandbox \"f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 08:42:27.368180 containerd[1817]: time="2025-08-13T08:42:27.368130329Z" level=info msg="CreateContainer within sandbox \"f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1a6dd2e43f2557926ef62ad0cb373da9cd4034b3b3e0719c6691f5548d856448\"" Aug 13 08:42:27.368393 containerd[1817]: time="2025-08-13T08:42:27.368354514Z" level=info msg="StartContainer for \"1a6dd2e43f2557926ef62ad0cb373da9cd4034b3b3e0719c6691f5548d856448\"" Aug 13 08:42:27.390363 systemd[1]: Started cri-containerd-1a6dd2e43f2557926ef62ad0cb373da9cd4034b3b3e0719c6691f5548d856448.scope - libcontainer container 1a6dd2e43f2557926ef62ad0cb373da9cd4034b3b3e0719c6691f5548d856448. 
Aug 13 08:42:27.401805 containerd[1817]: time="2025-08-13T08:42:27.401766654Z" level=info msg="StartContainer for \"1a6dd2e43f2557926ef62ad0cb373da9cd4034b3b3e0719c6691f5548d856448\" returns successfully" Aug 13 08:42:28.027659 kubelet[3075]: I0813 08:42:28.027623 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wdzjv" podStartSLOduration=37.027610712 podStartE2EDuration="37.027610712s" podCreationTimestamp="2025-08-13 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 08:42:28.027508974 +0000 UTC m=+42.222749959" watchObservedRunningTime="2025-08-13 08:42:28.027610712 +0000 UTC m=+42.222851694" Aug 13 08:42:28.038136 kubelet[3075]: I0813 08:42:28.038100 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bqv6d" podStartSLOduration=37.038087673 podStartE2EDuration="37.038087673s" podCreationTimestamp="2025-08-13 08:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 08:42:28.037814034 +0000 UTC m=+42.233055018" watchObservedRunningTime="2025-08-13 08:42:28.038087673 +0000 UTC m=+42.233328655" Aug 13 08:42:28.375458 systemd-networkd[1619]: cali55fed4ec012: Gained IPv6LL Aug 13 08:42:28.376364 systemd-networkd[1619]: cali96b62aa8bfa: Gained IPv6LL Aug 13 08:42:28.752013 containerd[1817]: time="2025-08-13T08:42:28.751951191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:28.752304 containerd[1817]: time="2025-08-13T08:42:28.752113429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 08:42:28.752503 containerd[1817]: time="2025-08-13T08:42:28.752489066Z" 
level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:28.753565 containerd[1817]: time="2025-08-13T08:42:28.753551316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:28.754015 containerd[1817]: time="2025-08-13T08:42:28.754000966Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.985590906s" Aug 13 08:42:28.754050 containerd[1817]: time="2025-08-13T08:42:28.754017276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 08:42:28.754530 containerd[1817]: time="2025-08-13T08:42:28.754484129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 08:42:28.757395 containerd[1817]: time="2025-08-13T08:42:28.757380311Z" level=info msg="CreateContainer within sandbox \"ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 08:42:28.761648 containerd[1817]: time="2025-08-13T08:42:28.761602325Z" level=info msg="CreateContainer within sandbox \"ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"11ce0d79500efd100f5115ead4281d694aec6bc92ba090681aad63c83a08a505\"" Aug 13 08:42:28.761837 
containerd[1817]: time="2025-08-13T08:42:28.761826080Z" level=info msg="StartContainer for \"11ce0d79500efd100f5115ead4281d694aec6bc92ba090681aad63c83a08a505\"" Aug 13 08:42:28.782450 systemd[1]: Started cri-containerd-11ce0d79500efd100f5115ead4281d694aec6bc92ba090681aad63c83a08a505.scope - libcontainer container 11ce0d79500efd100f5115ead4281d694aec6bc92ba090681aad63c83a08a505. Aug 13 08:42:28.816970 containerd[1817]: time="2025-08-13T08:42:28.816937770Z" level=info msg="StartContainer for \"11ce0d79500efd100f5115ead4281d694aec6bc92ba090681aad63c83a08a505\" returns successfully" Aug 13 08:42:29.014658 systemd-networkd[1619]: calif43ebe970f1: Gained IPv6LL Aug 13 08:42:29.047635 kubelet[3075]: I0813 08:42:29.047523 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6468cf5bfc-jkn9n" podStartSLOduration=23.356957945 podStartE2EDuration="28.047487428s" podCreationTimestamp="2025-08-13 08:42:01 +0000 UTC" firstStartedPulling="2025-08-13 08:42:24.063883631 +0000 UTC m=+38.259124618" lastFinishedPulling="2025-08-13 08:42:28.754413116 +0000 UTC m=+42.949654101" observedRunningTime="2025-08-13 08:42:29.046248757 +0000 UTC m=+43.241489835" watchObservedRunningTime="2025-08-13 08:42:29.047487428 +0000 UTC m=+43.242728466" Aug 13 08:42:29.270875 kubelet[3075]: I0813 08:42:29.270778 3075 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 08:42:30.026329 kubelet[3075]: I0813 08:42:30.026307 3075 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 08:42:30.528765 containerd[1817]: time="2025-08-13T08:42:30.528713914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:30.544900 containerd[1817]: time="2025-08-13T08:42:30.544850679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 
08:42:30.545274 containerd[1817]: time="2025-08-13T08:42:30.545228076Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:30.546352 containerd[1817]: time="2025-08-13T08:42:30.546311046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:30.547792 containerd[1817]: time="2025-08-13T08:42:30.547767568Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.793265431s" Aug 13 08:42:30.547832 containerd[1817]: time="2025-08-13T08:42:30.547796404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 08:42:30.548340 containerd[1817]: time="2025-08-13T08:42:30.548300310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 08:42:30.548799 containerd[1817]: time="2025-08-13T08:42:30.548785317Z" level=info msg="CreateContainer within sandbox \"d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 08:42:30.554384 containerd[1817]: time="2025-08-13T08:42:30.554340578Z" level=info msg="CreateContainer within sandbox \"d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"745c8ad13598f4b5bc5ad912bbe346520f164bb460cfc4e1ae266f25b82d1eb4\"" Aug 13 08:42:30.554619 containerd[1817]: 
time="2025-08-13T08:42:30.554584879Z" level=info msg="StartContainer for \"745c8ad13598f4b5bc5ad912bbe346520f164bb460cfc4e1ae266f25b82d1eb4\"" Aug 13 08:42:30.581296 systemd[1]: Started cri-containerd-745c8ad13598f4b5bc5ad912bbe346520f164bb460cfc4e1ae266f25b82d1eb4.scope - libcontainer container 745c8ad13598f4b5bc5ad912bbe346520f164bb460cfc4e1ae266f25b82d1eb4. Aug 13 08:42:30.593712 containerd[1817]: time="2025-08-13T08:42:30.593693610Z" level=info msg="StartContainer for \"745c8ad13598f4b5bc5ad912bbe346520f164bb460cfc4e1ae266f25b82d1eb4\" returns successfully" Aug 13 08:42:33.413738 containerd[1817]: time="2025-08-13T08:42:33.413679763Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:33.413992 containerd[1817]: time="2025-08-13T08:42:33.413879881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 08:42:33.414355 containerd[1817]: time="2025-08-13T08:42:33.414342226Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:33.415370 containerd[1817]: time="2025-08-13T08:42:33.415358955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:33.415918 containerd[1817]: time="2025-08-13T08:42:33.415901727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.867583725s" Aug 13 
08:42:33.415944 containerd[1817]: time="2025-08-13T08:42:33.415923941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 08:42:33.416471 containerd[1817]: time="2025-08-13T08:42:33.416431997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 08:42:33.416968 containerd[1817]: time="2025-08-13T08:42:33.416956427Z" level=info msg="CreateContainer within sandbox \"159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 08:42:33.420834 containerd[1817]: time="2025-08-13T08:42:33.420788663Z" level=info msg="CreateContainer within sandbox \"159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f4f27358ebd8f67849ef0a9abc920484560a81a1d12e32ce4f976c64c8d197b7\"" Aug 13 08:42:33.421059 containerd[1817]: time="2025-08-13T08:42:33.421042501Z" level=info msg="StartContainer for \"f4f27358ebd8f67849ef0a9abc920484560a81a1d12e32ce4f976c64c8d197b7\"" Aug 13 08:42:33.455466 systemd[1]: Started cri-containerd-f4f27358ebd8f67849ef0a9abc920484560a81a1d12e32ce4f976c64c8d197b7.scope - libcontainer container f4f27358ebd8f67849ef0a9abc920484560a81a1d12e32ce4f976c64c8d197b7. 
Aug 13 08:42:33.486222 containerd[1817]: time="2025-08-13T08:42:33.486171697Z" level=info msg="StartContainer for \"f4f27358ebd8f67849ef0a9abc920484560a81a1d12e32ce4f976c64c8d197b7\" returns successfully" Aug 13 08:42:33.828279 containerd[1817]: time="2025-08-13T08:42:33.828222206Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:33.828460 containerd[1817]: time="2025-08-13T08:42:33.828409959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 08:42:33.829691 containerd[1817]: time="2025-08-13T08:42:33.829655340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 413.203782ms" Aug 13 08:42:33.829691 containerd[1817]: time="2025-08-13T08:42:33.829669862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 08:42:33.830162 containerd[1817]: time="2025-08-13T08:42:33.830125373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 08:42:33.830767 containerd[1817]: time="2025-08-13T08:42:33.830755688Z" level=info msg="CreateContainer within sandbox \"827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 08:42:33.834595 containerd[1817]: time="2025-08-13T08:42:33.834577768Z" level=info msg="CreateContainer within sandbox \"827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"eb32ba4fa73afea23e3c372c3857f77f7a27a11677f25e9adcc1d20511585bd8\"" Aug 13 08:42:33.834897 containerd[1817]: time="2025-08-13T08:42:33.834880467Z" level=info msg="StartContainer for \"eb32ba4fa73afea23e3c372c3857f77f7a27a11677f25e9adcc1d20511585bd8\"" Aug 13 08:42:33.855359 systemd[1]: Started cri-containerd-eb32ba4fa73afea23e3c372c3857f77f7a27a11677f25e9adcc1d20511585bd8.scope - libcontainer container eb32ba4fa73afea23e3c372c3857f77f7a27a11677f25e9adcc1d20511585bd8. Aug 13 08:42:33.879923 containerd[1817]: time="2025-08-13T08:42:33.879899889Z" level=info msg="StartContainer for \"eb32ba4fa73afea23e3c372c3857f77f7a27a11677f25e9adcc1d20511585bd8\" returns successfully" Aug 13 08:42:34.047150 kubelet[3075]: I0813 08:42:34.047106 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-658b4bf6b-bvb5g" podStartSLOduration=28.387665277 podStartE2EDuration="35.047094486s" podCreationTimestamp="2025-08-13 08:41:59 +0000 UTC" firstStartedPulling="2025-08-13 08:42:27.170634244 +0000 UTC m=+41.365875235" lastFinishedPulling="2025-08-13 08:42:33.830063457 +0000 UTC m=+48.025304444" observedRunningTime="2025-08-13 08:42:34.047029109 +0000 UTC m=+48.242270098" watchObservedRunningTime="2025-08-13 08:42:34.047094486 +0000 UTC m=+48.242335467" Aug 13 08:42:34.051805 kubelet[3075]: I0813 08:42:34.051773 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-658b4bf6b-zt564" podStartSLOduration=26.645988483 podStartE2EDuration="35.051761108s" podCreationTimestamp="2025-08-13 08:41:59 +0000 UTC" firstStartedPulling="2025-08-13 08:42:25.010582401 +0000 UTC m=+39.205823385" lastFinishedPulling="2025-08-13 08:42:33.416355027 +0000 UTC m=+47.611596010" observedRunningTime="2025-08-13 08:42:34.051488482 +0000 UTC m=+48.246729476" watchObservedRunningTime="2025-08-13 08:42:34.051761108 +0000 UTC 
m=+48.247002090" Aug 13 08:42:35.042361 kubelet[3075]: I0813 08:42:35.042308 3075 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 08:42:35.042456 kubelet[3075]: I0813 08:42:35.042308 3075 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 08:42:35.703700 containerd[1817]: time="2025-08-13T08:42:35.703647502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:35.703930 containerd[1817]: time="2025-08-13T08:42:35.703881298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 08:42:35.704214 containerd[1817]: time="2025-08-13T08:42:35.704196502Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:35.705146 containerd[1817]: time="2025-08-13T08:42:35.705131810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 08:42:35.705619 containerd[1817]: time="2025-08-13T08:42:35.705594602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.875452935s" Aug 13 08:42:35.705619 containerd[1817]: time="2025-08-13T08:42:35.705612812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference 
\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 08:42:35.706507 containerd[1817]: time="2025-08-13T08:42:35.706494271Z" level=info msg="CreateContainer within sandbox \"d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 08:42:35.711285 containerd[1817]: time="2025-08-13T08:42:35.711243182Z" level=info msg="CreateContainer within sandbox \"d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f2a88e880b57e723f3830d1e094e7804930455f6a0f659676dcda0f4a1ce6e6f\"" Aug 13 08:42:35.711547 containerd[1817]: time="2025-08-13T08:42:35.711494338Z" level=info msg="StartContainer for \"f2a88e880b57e723f3830d1e094e7804930455f6a0f659676dcda0f4a1ce6e6f\"" Aug 13 08:42:35.733479 systemd[1]: Started cri-containerd-f2a88e880b57e723f3830d1e094e7804930455f6a0f659676dcda0f4a1ce6e6f.scope - libcontainer container f2a88e880b57e723f3830d1e094e7804930455f6a0f659676dcda0f4a1ce6e6f. 
Aug 13 08:42:35.746288 containerd[1817]: time="2025-08-13T08:42:35.746267832Z" level=info msg="StartContainer for \"f2a88e880b57e723f3830d1e094e7804930455f6a0f659676dcda0f4a1ce6e6f\" returns successfully" Aug 13 08:42:35.888864 kubelet[3075]: I0813 08:42:35.888802 3075 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 08:42:35.888864 kubelet[3075]: I0813 08:42:35.888837 3075 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 08:42:36.070528 kubelet[3075]: I0813 08:42:36.070313 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9rjlh" podStartSLOduration=23.632632671 podStartE2EDuration="35.070276612s" podCreationTimestamp="2025-08-13 08:42:01 +0000 UTC" firstStartedPulling="2025-08-13 08:42:24.268306554 +0000 UTC m=+38.463547605" lastFinishedPulling="2025-08-13 08:42:35.705950562 +0000 UTC m=+49.901191546" observedRunningTime="2025-08-13 08:42:36.069550128 +0000 UTC m=+50.264791188" watchObservedRunningTime="2025-08-13 08:42:36.070276612 +0000 UTC m=+50.265517641" Aug 13 08:42:41.070702 kubelet[3075]: I0813 08:42:41.070490 3075 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 08:42:45.844626 containerd[1817]: time="2025-08-13T08:42:45.844570051Z" level=info msg="StopPodSandbox for \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\"" Aug 13 08:42:45.881576 containerd[1817]: 2025-08-13 08:42:45.863 [WARNING][6778] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"bcdd4219-dd3a-4d45-91ce-fcd49d31398a", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 42, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3", Pod:"goldmane-768f4c5c69-fddzn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.79.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8fe1bfa28a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:45.881576 containerd[1817]: 2025-08-13 08:42:45.863 [INFO][6778] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Aug 13 08:42:45.881576 containerd[1817]: 2025-08-13 08:42:45.863 [INFO][6778] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" iface="eth0" netns="" Aug 13 08:42:45.881576 containerd[1817]: 2025-08-13 08:42:45.863 [INFO][6778] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Aug 13 08:42:45.881576 containerd[1817]: 2025-08-13 08:42:45.863 [INFO][6778] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Aug 13 08:42:45.881576 containerd[1817]: 2025-08-13 08:42:45.874 [INFO][6795] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" HandleID="k8s-pod-network.4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0" Aug 13 08:42:45.881576 containerd[1817]: 2025-08-13 08:42:45.874 [INFO][6795] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:45.881576 containerd[1817]: 2025-08-13 08:42:45.874 [INFO][6795] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:45.881576 containerd[1817]: 2025-08-13 08:42:45.878 [WARNING][6795] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" HandleID="k8s-pod-network.4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0" Aug 13 08:42:45.881576 containerd[1817]: 2025-08-13 08:42:45.878 [INFO][6795] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" HandleID="k8s-pod-network.4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0" Aug 13 08:42:45.881576 containerd[1817]: 2025-08-13 08:42:45.879 [INFO][6795] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:45.881576 containerd[1817]: 2025-08-13 08:42:45.880 [INFO][6778] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Aug 13 08:42:45.881887 containerd[1817]: time="2025-08-13T08:42:45.881605862Z" level=info msg="TearDown network for sandbox \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\" successfully" Aug 13 08:42:45.881887 containerd[1817]: time="2025-08-13T08:42:45.881623323Z" level=info msg="StopPodSandbox for \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\" returns successfully" Aug 13 08:42:45.881887 containerd[1817]: time="2025-08-13T08:42:45.881868153Z" level=info msg="RemovePodSandbox for \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\"" Aug 13 08:42:45.881887 containerd[1817]: time="2025-08-13T08:42:45.881885027Z" level=info msg="Forcibly stopping sandbox \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\"" Aug 13 08:42:45.917624 containerd[1817]: 2025-08-13 08:42:45.900 [WARNING][6821] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"bcdd4219-dd3a-4d45-91ce-fcd49d31398a", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 42, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"75918a33f01ca97ff775420f6e1a7bf5e6e5eb54824406e9b421239ef1b9b2b3", Pod:"goldmane-768f4c5c69-fddzn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.79.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8fe1bfa28a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:45.917624 containerd[1817]: 2025-08-13 08:42:45.900 [INFO][6821] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Aug 13 08:42:45.917624 containerd[1817]: 2025-08-13 08:42:45.900 [INFO][6821] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" iface="eth0" netns="" Aug 13 08:42:45.917624 containerd[1817]: 2025-08-13 08:42:45.900 [INFO][6821] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Aug 13 08:42:45.917624 containerd[1817]: 2025-08-13 08:42:45.900 [INFO][6821] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Aug 13 08:42:45.917624 containerd[1817]: 2025-08-13 08:42:45.910 [INFO][6836] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" HandleID="k8s-pod-network.4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0" Aug 13 08:42:45.917624 containerd[1817]: 2025-08-13 08:42:45.910 [INFO][6836] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:45.917624 containerd[1817]: 2025-08-13 08:42:45.910 [INFO][6836] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:45.917624 containerd[1817]: 2025-08-13 08:42:45.914 [WARNING][6836] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" HandleID="k8s-pod-network.4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0" Aug 13 08:42:45.917624 containerd[1817]: 2025-08-13 08:42:45.914 [INFO][6836] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" HandleID="k8s-pod-network.4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-goldmane--768f4c5c69--fddzn-eth0" Aug 13 08:42:45.917624 containerd[1817]: 2025-08-13 08:42:45.916 [INFO][6836] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:45.917624 containerd[1817]: 2025-08-13 08:42:45.916 [INFO][6821] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde" Aug 13 08:42:45.917624 containerd[1817]: time="2025-08-13T08:42:45.917612830Z" level=info msg="TearDown network for sandbox \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\" successfully" Aug 13 08:42:45.919244 containerd[1817]: time="2025-08-13T08:42:45.919226881Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 08:42:45.919282 containerd[1817]: time="2025-08-13T08:42:45.919261674Z" level=info msg="RemovePodSandbox \"4b3d732c4995fad29553900e9628c60827e9eb0d4c8b46784d982fee437c8fde\" returns successfully" Aug 13 08:42:45.919543 containerd[1817]: time="2025-08-13T08:42:45.919533289Z" level=info msg="StopPodSandbox for \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\"" Aug 13 08:42:45.954464 containerd[1817]: 2025-08-13 08:42:45.936 [WARNING][6861] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0", GenerateName:"calico-kube-controllers-6468cf5bfc-", Namespace:"calico-system", SelfLink:"", UID:"e4fc3133-4463-408d-975f-642106e09eac", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 42, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6468cf5bfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429", Pod:"calico-kube-controllers-6468cf5bfc-jkn9n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.131/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0dd7a301021", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:45.954464 containerd[1817]: 2025-08-13 08:42:45.936 [INFO][6861] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Aug 13 08:42:45.954464 containerd[1817]: 2025-08-13 08:42:45.936 [INFO][6861] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" iface="eth0" netns="" Aug 13 08:42:45.954464 containerd[1817]: 2025-08-13 08:42:45.936 [INFO][6861] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Aug 13 08:42:45.954464 containerd[1817]: 2025-08-13 08:42:45.936 [INFO][6861] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Aug 13 08:42:45.954464 containerd[1817]: 2025-08-13 08:42:45.947 [INFO][6877] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" HandleID="k8s-pod-network.5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0" Aug 13 08:42:45.954464 containerd[1817]: 2025-08-13 08:42:45.947 [INFO][6877] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:45.954464 containerd[1817]: 2025-08-13 08:42:45.947 [INFO][6877] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:45.954464 containerd[1817]: 2025-08-13 08:42:45.951 [WARNING][6877] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" HandleID="k8s-pod-network.5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0" Aug 13 08:42:45.954464 containerd[1817]: 2025-08-13 08:42:45.951 [INFO][6877] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" HandleID="k8s-pod-network.5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0" Aug 13 08:42:45.954464 containerd[1817]: 2025-08-13 08:42:45.952 [INFO][6877] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:45.954464 containerd[1817]: 2025-08-13 08:42:45.953 [INFO][6861] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Aug 13 08:42:45.954814 containerd[1817]: time="2025-08-13T08:42:45.954483889Z" level=info msg="TearDown network for sandbox \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\" successfully" Aug 13 08:42:45.954814 containerd[1817]: time="2025-08-13T08:42:45.954501475Z" level=info msg="StopPodSandbox for \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\" returns successfully" Aug 13 08:42:45.954814 containerd[1817]: time="2025-08-13T08:42:45.954791801Z" level=info msg="RemovePodSandbox for \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\"" Aug 13 08:42:45.954876 containerd[1817]: time="2025-08-13T08:42:45.954815470Z" level=info msg="Forcibly stopping sandbox \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\"" Aug 13 08:42:45.993224 containerd[1817]: 2025-08-13 08:42:45.973 [WARNING][6900] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0", GenerateName:"calico-kube-controllers-6468cf5bfc-", Namespace:"calico-system", SelfLink:"", UID:"e4fc3133-4463-408d-975f-642106e09eac", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 42, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6468cf5bfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"ff5f16a029a27e19949255ccb5069bc9965652a837835b40a193d3e0aee38429", Pod:"calico-kube-controllers-6468cf5bfc-jkn9n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0dd7a301021", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:45.993224 containerd[1817]: 2025-08-13 08:42:45.974 [INFO][6900] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Aug 13 08:42:45.993224 containerd[1817]: 2025-08-13 08:42:45.974 [INFO][6900] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" iface="eth0" netns="" Aug 13 08:42:45.993224 containerd[1817]: 2025-08-13 08:42:45.974 [INFO][6900] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Aug 13 08:42:45.993224 containerd[1817]: 2025-08-13 08:42:45.974 [INFO][6900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Aug 13 08:42:45.993224 containerd[1817]: 2025-08-13 08:42:45.985 [INFO][6917] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" HandleID="k8s-pod-network.5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0" Aug 13 08:42:45.993224 containerd[1817]: 2025-08-13 08:42:45.985 [INFO][6917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:45.993224 containerd[1817]: 2025-08-13 08:42:45.985 [INFO][6917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:45.993224 containerd[1817]: 2025-08-13 08:42:45.990 [WARNING][6917] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" HandleID="k8s-pod-network.5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0" Aug 13 08:42:45.993224 containerd[1817]: 2025-08-13 08:42:45.990 [INFO][6917] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" HandleID="k8s-pod-network.5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--kube--controllers--6468cf5bfc--jkn9n-eth0" Aug 13 08:42:45.993224 containerd[1817]: 2025-08-13 08:42:45.991 [INFO][6917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:45.993224 containerd[1817]: 2025-08-13 08:42:45.992 [INFO][6900] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2" Aug 13 08:42:45.993653 containerd[1817]: time="2025-08-13T08:42:45.993249643Z" level=info msg="TearDown network for sandbox \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\" successfully" Aug 13 08:42:45.995204 containerd[1817]: time="2025-08-13T08:42:45.995157890Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 08:42:45.995242 containerd[1817]: time="2025-08-13T08:42:45.995205814Z" level=info msg="RemovePodSandbox \"5474df809756bc6229c4255c94cccef8a440b1bf4e95bc8e1d55e987241dbee2\" returns successfully" Aug 13 08:42:45.995483 containerd[1817]: time="2025-08-13T08:42:45.995442999Z" level=info msg="StopPodSandbox for \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\"" Aug 13 08:42:46.030847 containerd[1817]: 2025-08-13 08:42:46.012 [WARNING][6943] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5be6b7a1-9903-4a08-a181-3b1833ebdcac", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 42, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814", Pod:"csi-node-driver-9rjlh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali642c1e755e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:46.030847 containerd[1817]: 2025-08-13 08:42:46.012 [INFO][6943] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Aug 13 08:42:46.030847 containerd[1817]: 2025-08-13 08:42:46.012 [INFO][6943] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" iface="eth0" netns="" Aug 13 08:42:46.030847 containerd[1817]: 2025-08-13 08:42:46.012 [INFO][6943] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Aug 13 08:42:46.030847 containerd[1817]: 2025-08-13 08:42:46.012 [INFO][6943] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Aug 13 08:42:46.030847 containerd[1817]: 2025-08-13 08:42:46.023 [INFO][6957] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" HandleID="k8s-pod-network.f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0" Aug 13 08:42:46.030847 containerd[1817]: 2025-08-13 08:42:46.023 [INFO][6957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:46.030847 containerd[1817]: 2025-08-13 08:42:46.023 [INFO][6957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:46.030847 containerd[1817]: 2025-08-13 08:42:46.027 [WARNING][6957] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" HandleID="k8s-pod-network.f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0" Aug 13 08:42:46.030847 containerd[1817]: 2025-08-13 08:42:46.027 [INFO][6957] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" HandleID="k8s-pod-network.f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0" Aug 13 08:42:46.030847 containerd[1817]: 2025-08-13 08:42:46.029 [INFO][6957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:46.030847 containerd[1817]: 2025-08-13 08:42:46.030 [INFO][6943] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Aug 13 08:42:46.031227 containerd[1817]: time="2025-08-13T08:42:46.030874891Z" level=info msg="TearDown network for sandbox \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\" successfully" Aug 13 08:42:46.031227 containerd[1817]: time="2025-08-13T08:42:46.030893463Z" level=info msg="StopPodSandbox for \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\" returns successfully" Aug 13 08:42:46.031227 containerd[1817]: time="2025-08-13T08:42:46.031185815Z" level=info msg="RemovePodSandbox for \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\"" Aug 13 08:42:46.031227 containerd[1817]: time="2025-08-13T08:42:46.031206724Z" level=info msg="Forcibly stopping sandbox \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\"" Aug 13 08:42:46.078156 containerd[1817]: 2025-08-13 08:42:46.054 [WARNING][6979] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5be6b7a1-9903-4a08-a181-3b1833ebdcac", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 42, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"d7ad22b6fef2c4f9d66f623e9669bb1190b8c9374ba5943352ef033e5e448814", Pod:"csi-node-driver-9rjlh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali642c1e755e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:46.078156 containerd[1817]: 2025-08-13 08:42:46.054 [INFO][6979] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Aug 13 08:42:46.078156 containerd[1817]: 2025-08-13 08:42:46.054 [INFO][6979] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" iface="eth0" netns="" Aug 13 08:42:46.078156 containerd[1817]: 2025-08-13 08:42:46.054 [INFO][6979] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Aug 13 08:42:46.078156 containerd[1817]: 2025-08-13 08:42:46.054 [INFO][6979] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Aug 13 08:42:46.078156 containerd[1817]: 2025-08-13 08:42:46.069 [INFO][6998] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" HandleID="k8s-pod-network.f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0" Aug 13 08:42:46.078156 containerd[1817]: 2025-08-13 08:42:46.069 [INFO][6998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:46.078156 containerd[1817]: 2025-08-13 08:42:46.069 [INFO][6998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:46.078156 containerd[1817]: 2025-08-13 08:42:46.074 [WARNING][6998] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" HandleID="k8s-pod-network.f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0" Aug 13 08:42:46.078156 containerd[1817]: 2025-08-13 08:42:46.074 [INFO][6998] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" HandleID="k8s-pod-network.f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-csi--node--driver--9rjlh-eth0" Aug 13 08:42:46.078156 containerd[1817]: 2025-08-13 08:42:46.076 [INFO][6998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:46.078156 containerd[1817]: 2025-08-13 08:42:46.077 [INFO][6979] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f" Aug 13 08:42:46.078552 containerd[1817]: time="2025-08-13T08:42:46.078190416Z" level=info msg="TearDown network for sandbox \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\" successfully" Aug 13 08:42:46.080164 containerd[1817]: time="2025-08-13T08:42:46.080122107Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 08:42:46.080164 containerd[1817]: time="2025-08-13T08:42:46.080155704Z" level=info msg="RemovePodSandbox \"f1a62f89c64488cb9d67010de29389ee24ae0bef15da745217c8f84f04a1fb2f\" returns successfully" Aug 13 08:42:46.080417 containerd[1817]: time="2025-08-13T08:42:46.080378803Z" level=info msg="StopPodSandbox for \"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\"" Aug 13 08:42:46.114104 containerd[1817]: 2025-08-13 08:42:46.097 [WARNING][7027] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1b313c34-ec50-4ab0-8732-819433f220ba", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 41, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77", Pod:"coredns-668d6bf9bc-bqv6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96b62aa8bfa", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:46.114104 containerd[1817]: 2025-08-13 08:42:46.097 [INFO][7027] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Aug 13 08:42:46.114104 containerd[1817]: 2025-08-13 08:42:46.097 [INFO][7027] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" iface="eth0" netns="" Aug 13 08:42:46.114104 containerd[1817]: 2025-08-13 08:42:46.097 [INFO][7027] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Aug 13 08:42:46.114104 containerd[1817]: 2025-08-13 08:42:46.097 [INFO][7027] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Aug 13 08:42:46.114104 containerd[1817]: 2025-08-13 08:42:46.107 [INFO][7045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" HandleID="k8s-pod-network.a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0" Aug 13 08:42:46.114104 containerd[1817]: 2025-08-13 08:42:46.107 [INFO][7045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 08:42:46.114104 containerd[1817]: 2025-08-13 08:42:46.107 [INFO][7045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:46.114104 containerd[1817]: 2025-08-13 08:42:46.111 [WARNING][7045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" HandleID="k8s-pod-network.a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0" Aug 13 08:42:46.114104 containerd[1817]: 2025-08-13 08:42:46.111 [INFO][7045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" HandleID="k8s-pod-network.a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0" Aug 13 08:42:46.114104 containerd[1817]: 2025-08-13 08:42:46.112 [INFO][7045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:46.114104 containerd[1817]: 2025-08-13 08:42:46.113 [INFO][7027] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Aug 13 08:42:46.114386 containerd[1817]: time="2025-08-13T08:42:46.114095753Z" level=info msg="TearDown network for sandbox \"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\" successfully" Aug 13 08:42:46.114386 containerd[1817]: time="2025-08-13T08:42:46.114115530Z" level=info msg="StopPodSandbox for \"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\" returns successfully" Aug 13 08:42:46.114419 containerd[1817]: time="2025-08-13T08:42:46.114383163Z" level=info msg="RemovePodSandbox for \"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\"" Aug 13 08:42:46.114419 containerd[1817]: time="2025-08-13T08:42:46.114404297Z" level=info msg="Forcibly stopping sandbox \"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\"" Aug 13 08:42:46.148884 containerd[1817]: 2025-08-13 08:42:46.132 [WARNING][7068] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1b313c34-ec50-4ab0-8732-819433f220ba", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 41, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"08937e1961913f3ada91ea7fabc530958a1ad6acbd1249d2930ed003e6e0fe77", Pod:"coredns-668d6bf9bc-bqv6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96b62aa8bfa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:46.148884 containerd[1817]: 2025-08-13 
08:42:46.132 [INFO][7068] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Aug 13 08:42:46.148884 containerd[1817]: 2025-08-13 08:42:46.132 [INFO][7068] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" iface="eth0" netns="" Aug 13 08:42:46.148884 containerd[1817]: 2025-08-13 08:42:46.132 [INFO][7068] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Aug 13 08:42:46.148884 containerd[1817]: 2025-08-13 08:42:46.132 [INFO][7068] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Aug 13 08:42:46.148884 containerd[1817]: 2025-08-13 08:42:46.142 [INFO][7081] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" HandleID="k8s-pod-network.a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0" Aug 13 08:42:46.148884 containerd[1817]: 2025-08-13 08:42:46.142 [INFO][7081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:46.148884 containerd[1817]: 2025-08-13 08:42:46.142 [INFO][7081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:46.148884 containerd[1817]: 2025-08-13 08:42:46.146 [WARNING][7081] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" HandleID="k8s-pod-network.a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0" Aug 13 08:42:46.148884 containerd[1817]: 2025-08-13 08:42:46.146 [INFO][7081] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" HandleID="k8s-pod-network.a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--bqv6d-eth0" Aug 13 08:42:46.148884 containerd[1817]: 2025-08-13 08:42:46.147 [INFO][7081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:46.148884 containerd[1817]: 2025-08-13 08:42:46.148 [INFO][7068] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9" Aug 13 08:42:46.148884 containerd[1817]: time="2025-08-13T08:42:46.148872188Z" level=info msg="TearDown network for sandbox \"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\" successfully" Aug 13 08:42:46.150399 containerd[1817]: time="2025-08-13T08:42:46.150357965Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 08:42:46.150399 containerd[1817]: time="2025-08-13T08:42:46.150388395Z" level=info msg="RemovePodSandbox \"a80f6ce21b7450e08ffb545940533d0a66d6d09703a9f84cf4b9551b3e919cd9\" returns successfully" Aug 13 08:42:46.150767 containerd[1817]: time="2025-08-13T08:42:46.150704644Z" level=info msg="StopPodSandbox for \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\"" Aug 13 08:42:46.184438 containerd[1817]: 2025-08-13 08:42:46.168 [WARNING][7106] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0", GenerateName:"calico-apiserver-658b4bf6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"5106c9bf-90a4-4cd5-ab2d-ed67b58da6b7", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 41, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"658b4bf6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75", Pod:"calico-apiserver-658b4bf6b-zt564", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicdafc933abe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:46.184438 containerd[1817]: 2025-08-13 08:42:46.168 [INFO][7106] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Aug 13 08:42:46.184438 containerd[1817]: 2025-08-13 08:42:46.168 [INFO][7106] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" iface="eth0" netns="" Aug 13 08:42:46.184438 containerd[1817]: 2025-08-13 08:42:46.168 [INFO][7106] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Aug 13 08:42:46.184438 containerd[1817]: 2025-08-13 08:42:46.168 [INFO][7106] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Aug 13 08:42:46.184438 containerd[1817]: 2025-08-13 08:42:46.178 [INFO][7120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" HandleID="k8s-pod-network.83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0" Aug 13 08:42:46.184438 containerd[1817]: 2025-08-13 08:42:46.178 [INFO][7120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:46.184438 containerd[1817]: 2025-08-13 08:42:46.178 [INFO][7120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:46.184438 containerd[1817]: 2025-08-13 08:42:46.181 [WARNING][7120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" HandleID="k8s-pod-network.83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0" Aug 13 08:42:46.184438 containerd[1817]: 2025-08-13 08:42:46.181 [INFO][7120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" HandleID="k8s-pod-network.83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0" Aug 13 08:42:46.184438 containerd[1817]: 2025-08-13 08:42:46.183 [INFO][7120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:46.184438 containerd[1817]: 2025-08-13 08:42:46.183 [INFO][7106] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Aug 13 08:42:46.184758 containerd[1817]: time="2025-08-13T08:42:46.184461465Z" level=info msg="TearDown network for sandbox \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\" successfully" Aug 13 08:42:46.184758 containerd[1817]: time="2025-08-13T08:42:46.184477397Z" level=info msg="StopPodSandbox for \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\" returns successfully" Aug 13 08:42:46.184758 containerd[1817]: time="2025-08-13T08:42:46.184740883Z" level=info msg="RemovePodSandbox for \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\"" Aug 13 08:42:46.184758 containerd[1817]: time="2025-08-13T08:42:46.184757106Z" level=info msg="Forcibly stopping sandbox \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\"" Aug 13 08:42:46.218511 containerd[1817]: 2025-08-13 08:42:46.201 [WARNING][7140] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0", GenerateName:"calico-apiserver-658b4bf6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"5106c9bf-90a4-4cd5-ab2d-ed67b58da6b7", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 41, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"658b4bf6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"159e102022392b6dc932de5f8e666539de5432eba271b2180e86545bd6a0bb75", Pod:"calico-apiserver-658b4bf6b-zt564", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicdafc933abe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:46.218511 containerd[1817]: 2025-08-13 08:42:46.202 [INFO][7140] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Aug 13 08:42:46.218511 containerd[1817]: 2025-08-13 08:42:46.202 [INFO][7140] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" iface="eth0" netns="" Aug 13 08:42:46.218511 containerd[1817]: 2025-08-13 08:42:46.202 [INFO][7140] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Aug 13 08:42:46.218511 containerd[1817]: 2025-08-13 08:42:46.202 [INFO][7140] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Aug 13 08:42:46.218511 containerd[1817]: 2025-08-13 08:42:46.211 [INFO][7156] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" HandleID="k8s-pod-network.83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0" Aug 13 08:42:46.218511 containerd[1817]: 2025-08-13 08:42:46.211 [INFO][7156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:46.218511 containerd[1817]: 2025-08-13 08:42:46.211 [INFO][7156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:46.218511 containerd[1817]: 2025-08-13 08:42:46.216 [WARNING][7156] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" HandleID="k8s-pod-network.83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0" Aug 13 08:42:46.218511 containerd[1817]: 2025-08-13 08:42:46.216 [INFO][7156] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" HandleID="k8s-pod-network.83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--zt564-eth0" Aug 13 08:42:46.218511 containerd[1817]: 2025-08-13 08:42:46.217 [INFO][7156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:46.218511 containerd[1817]: 2025-08-13 08:42:46.217 [INFO][7140] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21" Aug 13 08:42:46.218511 containerd[1817]: time="2025-08-13T08:42:46.218496964Z" level=info msg="TearDown network for sandbox \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\" successfully" Aug 13 08:42:46.219924 containerd[1817]: time="2025-08-13T08:42:46.219884171Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 08:42:46.219924 containerd[1817]: time="2025-08-13T08:42:46.219915466Z" level=info msg="RemovePodSandbox \"83ae62197dd8c3e29f0d6ed18804108631e6d20e33c545207aaf55a532b6ed21\" returns successfully" Aug 13 08:42:46.220214 containerd[1817]: time="2025-08-13T08:42:46.220169054Z" level=info msg="StopPodSandbox for \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\"" Aug 13 08:42:46.258227 containerd[1817]: 2025-08-13 08:42:46.238 [WARNING][7181] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--69cffdcfc8--5ltj4-eth0" Aug 13 08:42:46.258227 containerd[1817]: 2025-08-13 08:42:46.238 [INFO][7181] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Aug 13 08:42:46.258227 containerd[1817]: 2025-08-13 08:42:46.238 [INFO][7181] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" iface="eth0" netns="" Aug 13 08:42:46.258227 containerd[1817]: 2025-08-13 08:42:46.238 [INFO][7181] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Aug 13 08:42:46.258227 containerd[1817]: 2025-08-13 08:42:46.238 [INFO][7181] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Aug 13 08:42:46.258227 containerd[1817]: 2025-08-13 08:42:46.249 [INFO][7196] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" HandleID="k8s-pod-network.00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--69cffdcfc8--5ltj4-eth0" Aug 13 08:42:46.258227 containerd[1817]: 2025-08-13 08:42:46.249 [INFO][7196] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:46.258227 containerd[1817]: 2025-08-13 08:42:46.249 [INFO][7196] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:46.258227 containerd[1817]: 2025-08-13 08:42:46.254 [WARNING][7196] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" HandleID="k8s-pod-network.00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--69cffdcfc8--5ltj4-eth0" Aug 13 08:42:46.258227 containerd[1817]: 2025-08-13 08:42:46.254 [INFO][7196] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" HandleID="k8s-pod-network.00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--69cffdcfc8--5ltj4-eth0" Aug 13 08:42:46.258227 containerd[1817]: 2025-08-13 08:42:46.256 [INFO][7196] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:46.258227 containerd[1817]: 2025-08-13 08:42:46.257 [INFO][7181] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Aug 13 08:42:46.258615 containerd[1817]: time="2025-08-13T08:42:46.258255797Z" level=info msg="TearDown network for sandbox \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\" successfully" Aug 13 08:42:46.258615 containerd[1817]: time="2025-08-13T08:42:46.258281782Z" level=info msg="StopPodSandbox for \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\" returns successfully" Aug 13 08:42:46.258674 containerd[1817]: time="2025-08-13T08:42:46.258644579Z" level=info msg="RemovePodSandbox for \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\"" Aug 13 08:42:46.258707 containerd[1817]: time="2025-08-13T08:42:46.258672040Z" level=info msg="Forcibly stopping sandbox \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\"" Aug 13 08:42:46.314980 containerd[1817]: 2025-08-13 08:42:46.286 [WARNING][7222] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" WorkloadEndpoint="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--69cffdcfc8--5ltj4-eth0" Aug 13 08:42:46.314980 containerd[1817]: 2025-08-13 08:42:46.286 [INFO][7222] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Aug 13 08:42:46.314980 containerd[1817]: 2025-08-13 08:42:46.286 [INFO][7222] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" iface="eth0" netns="" Aug 13 08:42:46.314980 containerd[1817]: 2025-08-13 08:42:46.286 [INFO][7222] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Aug 13 08:42:46.314980 containerd[1817]: 2025-08-13 08:42:46.287 [INFO][7222] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Aug 13 08:42:46.314980 containerd[1817]: 2025-08-13 08:42:46.305 [INFO][7239] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" HandleID="k8s-pod-network.00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--69cffdcfc8--5ltj4-eth0" Aug 13 08:42:46.314980 containerd[1817]: 2025-08-13 08:42:46.305 [INFO][7239] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:46.314980 containerd[1817]: 2025-08-13 08:42:46.305 [INFO][7239] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:46.314980 containerd[1817]: 2025-08-13 08:42:46.311 [WARNING][7239] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" HandleID="k8s-pod-network.00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--69cffdcfc8--5ltj4-eth0" Aug 13 08:42:46.314980 containerd[1817]: 2025-08-13 08:42:46.311 [INFO][7239] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" HandleID="k8s-pod-network.00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-whisker--69cffdcfc8--5ltj4-eth0" Aug 13 08:42:46.314980 containerd[1817]: 2025-08-13 08:42:46.312 [INFO][7239] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:46.314980 containerd[1817]: 2025-08-13 08:42:46.313 [INFO][7222] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841" Aug 13 08:42:46.315478 containerd[1817]: time="2025-08-13T08:42:46.315014213Z" level=info msg="TearDown network for sandbox \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\" successfully" Aug 13 08:42:46.317205 containerd[1817]: time="2025-08-13T08:42:46.317191801Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 08:42:46.317235 containerd[1817]: time="2025-08-13T08:42:46.317222937Z" level=info msg="RemovePodSandbox \"00675305bf837e83db140baf2ef218484a13aad64f7761c378e1f20b961d3841\" returns successfully" Aug 13 08:42:46.317530 containerd[1817]: time="2025-08-13T08:42:46.317518969Z" level=info msg="StopPodSandbox for \"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\"" Aug 13 08:42:46.352468 containerd[1817]: 2025-08-13 08:42:46.335 [WARNING][7264] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9ca82ef8-ce0c-4a00-987e-b04a91763a60", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 41, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86", Pod:"coredns-668d6bf9bc-wdzjv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55fed4ec012", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:46.352468 containerd[1817]: 2025-08-13 08:42:46.335 [INFO][7264] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Aug 13 08:42:46.352468 containerd[1817]: 2025-08-13 08:42:46.335 [INFO][7264] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" iface="eth0" netns="" Aug 13 08:42:46.352468 containerd[1817]: 2025-08-13 08:42:46.335 [INFO][7264] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Aug 13 08:42:46.352468 containerd[1817]: 2025-08-13 08:42:46.335 [INFO][7264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Aug 13 08:42:46.352468 containerd[1817]: 2025-08-13 08:42:46.345 [INFO][7282] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" HandleID="k8s-pod-network.e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0" Aug 13 08:42:46.352468 containerd[1817]: 2025-08-13 08:42:46.345 [INFO][7282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 08:42:46.352468 containerd[1817]: 2025-08-13 08:42:46.345 [INFO][7282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:46.352468 containerd[1817]: 2025-08-13 08:42:46.349 [WARNING][7282] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" HandleID="k8s-pod-network.e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0" Aug 13 08:42:46.352468 containerd[1817]: 2025-08-13 08:42:46.349 [INFO][7282] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" HandleID="k8s-pod-network.e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0" Aug 13 08:42:46.352468 containerd[1817]: 2025-08-13 08:42:46.350 [INFO][7282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:46.352468 containerd[1817]: 2025-08-13 08:42:46.351 [INFO][7264] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Aug 13 08:42:46.352468 containerd[1817]: time="2025-08-13T08:42:46.352463388Z" level=info msg="TearDown network for sandbox \"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\" successfully" Aug 13 08:42:46.352933 containerd[1817]: time="2025-08-13T08:42:46.352481271Z" level=info msg="StopPodSandbox for \"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\" returns successfully" Aug 13 08:42:46.352933 containerd[1817]: time="2025-08-13T08:42:46.352727420Z" level=info msg="RemovePodSandbox for \"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\"" Aug 13 08:42:46.352933 containerd[1817]: time="2025-08-13T08:42:46.352755021Z" level=info msg="Forcibly stopping sandbox \"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\"" Aug 13 08:42:46.387403 containerd[1817]: 2025-08-13 08:42:46.370 [WARNING][7309] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9ca82ef8-ce0c-4a00-987e-b04a91763a60", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 41, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"f5a9f1ab00c4badd313e67b1be6244126ddc155299d2a9918cabf15d4d49cc86", Pod:"coredns-668d6bf9bc-wdzjv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55fed4ec012", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:46.387403 containerd[1817]: 2025-08-13 
08:42:46.370 [INFO][7309] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Aug 13 08:42:46.387403 containerd[1817]: 2025-08-13 08:42:46.371 [INFO][7309] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" iface="eth0" netns="" Aug 13 08:42:46.387403 containerd[1817]: 2025-08-13 08:42:46.371 [INFO][7309] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Aug 13 08:42:46.387403 containerd[1817]: 2025-08-13 08:42:46.371 [INFO][7309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Aug 13 08:42:46.387403 containerd[1817]: 2025-08-13 08:42:46.380 [INFO][7325] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" HandleID="k8s-pod-network.e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0" Aug 13 08:42:46.387403 containerd[1817]: 2025-08-13 08:42:46.381 [INFO][7325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:46.387403 containerd[1817]: 2025-08-13 08:42:46.381 [INFO][7325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:46.387403 containerd[1817]: 2025-08-13 08:42:46.384 [WARNING][7325] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" HandleID="k8s-pod-network.e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0" Aug 13 08:42:46.387403 containerd[1817]: 2025-08-13 08:42:46.384 [INFO][7325] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" HandleID="k8s-pod-network.e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-coredns--668d6bf9bc--wdzjv-eth0" Aug 13 08:42:46.387403 containerd[1817]: 2025-08-13 08:42:46.386 [INFO][7325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:46.387403 containerd[1817]: 2025-08-13 08:42:46.386 [INFO][7309] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a" Aug 13 08:42:46.387403 containerd[1817]: time="2025-08-13T08:42:46.387352109Z" level=info msg="TearDown network for sandbox \"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\" successfully" Aug 13 08:42:46.388796 containerd[1817]: time="2025-08-13T08:42:46.388756019Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 08:42:46.388796 containerd[1817]: time="2025-08-13T08:42:46.388787184Z" level=info msg="RemovePodSandbox \"e416e69fadde2cfca41c3dc96198cee69f1599ba5a6eb3af1449fe542105ba8a\" returns successfully" Aug 13 08:42:46.389077 containerd[1817]: time="2025-08-13T08:42:46.389027343Z" level=info msg="StopPodSandbox for \"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\"" Aug 13 08:42:46.423448 containerd[1817]: 2025-08-13 08:42:46.406 [WARNING][7351] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0", GenerateName:"calico-apiserver-658b4bf6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d73c1f7-7407-44b3-82c7-2812b426db1e", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 41, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"658b4bf6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a", Pod:"calico-apiserver-658b4bf6b-bvb5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif43ebe970f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:46.423448 containerd[1817]: 2025-08-13 08:42:46.406 [INFO][7351] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Aug 13 08:42:46.423448 containerd[1817]: 2025-08-13 08:42:46.406 [INFO][7351] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" iface="eth0" netns="" Aug 13 08:42:46.423448 containerd[1817]: 2025-08-13 08:42:46.406 [INFO][7351] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Aug 13 08:42:46.423448 containerd[1817]: 2025-08-13 08:42:46.406 [INFO][7351] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Aug 13 08:42:46.423448 containerd[1817]: 2025-08-13 08:42:46.416 [INFO][7366] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" HandleID="k8s-pod-network.673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0" Aug 13 08:42:46.423448 containerd[1817]: 2025-08-13 08:42:46.416 [INFO][7366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:46.423448 containerd[1817]: 2025-08-13 08:42:46.416 [INFO][7366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:46.423448 containerd[1817]: 2025-08-13 08:42:46.420 [WARNING][7366] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" HandleID="k8s-pod-network.673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0" Aug 13 08:42:46.423448 containerd[1817]: 2025-08-13 08:42:46.420 [INFO][7366] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" HandleID="k8s-pod-network.673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0" Aug 13 08:42:46.423448 containerd[1817]: 2025-08-13 08:42:46.422 [INFO][7366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:46.423448 containerd[1817]: 2025-08-13 08:42:46.422 [INFO][7351] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Aug 13 08:42:46.423772 containerd[1817]: time="2025-08-13T08:42:46.423448658Z" level=info msg="TearDown network for sandbox \"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\" successfully" Aug 13 08:42:46.423772 containerd[1817]: time="2025-08-13T08:42:46.423465870Z" level=info msg="StopPodSandbox for \"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\" returns successfully" Aug 13 08:42:46.423772 containerd[1817]: time="2025-08-13T08:42:46.423735863Z" level=info msg="RemovePodSandbox for \"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\"" Aug 13 08:42:46.423772 containerd[1817]: time="2025-08-13T08:42:46.423752126Z" level=info msg="Forcibly stopping sandbox \"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\"" Aug 13 08:42:46.458121 containerd[1817]: 2025-08-13 08:42:46.441 [WARNING][7391] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0", GenerateName:"calico-apiserver-658b4bf6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d73c1f7-7407-44b3-82c7-2812b426db1e", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 8, 41, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"658b4bf6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-711ae8cc9f", ContainerID:"827c11331a1e3e00eb6e50f5802accfd3075fd684feb49abad3ffaece1698d0a", Pod:"calico-apiserver-658b4bf6b-bvb5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif43ebe970f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 08:42:46.458121 containerd[1817]: 2025-08-13 08:42:46.441 [INFO][7391] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Aug 13 08:42:46.458121 containerd[1817]: 2025-08-13 08:42:46.441 [INFO][7391] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" iface="eth0" netns="" Aug 13 08:42:46.458121 containerd[1817]: 2025-08-13 08:42:46.441 [INFO][7391] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Aug 13 08:42:46.458121 containerd[1817]: 2025-08-13 08:42:46.441 [INFO][7391] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Aug 13 08:42:46.458121 containerd[1817]: 2025-08-13 08:42:46.452 [INFO][7406] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" HandleID="k8s-pod-network.673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0" Aug 13 08:42:46.458121 containerd[1817]: 2025-08-13 08:42:46.452 [INFO][7406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 08:42:46.458121 containerd[1817]: 2025-08-13 08:42:46.452 [INFO][7406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 08:42:46.458121 containerd[1817]: 2025-08-13 08:42:46.455 [WARNING][7406] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" HandleID="k8s-pod-network.673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0" Aug 13 08:42:46.458121 containerd[1817]: 2025-08-13 08:42:46.455 [INFO][7406] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" HandleID="k8s-pod-network.673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Workload="ci--4081.3.5--a--711ae8cc9f-k8s-calico--apiserver--658b4bf6b--bvb5g-eth0" Aug 13 08:42:46.458121 containerd[1817]: 2025-08-13 08:42:46.456 [INFO][7406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 08:42:46.458121 containerd[1817]: 2025-08-13 08:42:46.457 [INFO][7391] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4" Aug 13 08:42:46.458427 containerd[1817]: time="2025-08-13T08:42:46.458122275Z" level=info msg="TearDown network for sandbox \"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\" successfully" Aug 13 08:42:46.460326 containerd[1817]: time="2025-08-13T08:42:46.460281496Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 08:42:46.460326 containerd[1817]: time="2025-08-13T08:42:46.460313091Z" level=info msg="RemovePodSandbox \"673530bf909043fd4536f3f3908437d82a9912b2dfb9a223a04efdb9ff95cbc4\" returns successfully" Aug 13 08:42:52.274790 kubelet[3075]: I0813 08:42:52.274708 3075 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 08:43:08.994307 kubelet[3075]: I0813 08:43:08.994259 3075 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 08:44:09.740686 update_engine[1812]: I20250813 08:44:09.740545 1812 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Aug 13 08:44:09.740686 update_engine[1812]: I20250813 08:44:09.740646 1812 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Aug 13 08:44:09.741835 update_engine[1812]: I20250813 08:44:09.741039 1812 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Aug 13 08:44:09.742258 update_engine[1812]: I20250813 08:44:09.742194 1812 omaha_request_params.cc:62] Current group set to lts Aug 13 08:44:09.742512 update_engine[1812]: I20250813 08:44:09.742422 1812 update_attempter.cc:499] Already updated boot flags. Skipping. Aug 13 08:44:09.742512 update_engine[1812]: I20250813 08:44:09.742455 1812 update_attempter.cc:643] Scheduling an action processor start. 
Aug 13 08:44:09.742512 update_engine[1812]: I20250813 08:44:09.742496 1812 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 08:44:09.742866 update_engine[1812]: I20250813 08:44:09.742569 1812 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Aug 13 08:44:09.742866 update_engine[1812]: I20250813 08:44:09.742733 1812 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 13 08:44:09.742866 update_engine[1812]: I20250813 08:44:09.742763 1812 omaha_request_action.cc:272] Request: Aug 13 08:44:09.742866 update_engine[1812]: Aug 13 08:44:09.742866 update_engine[1812]: Aug 13 08:44:09.742866 update_engine[1812]: Aug 13 08:44:09.742866 update_engine[1812]: Aug 13 08:44:09.742866 update_engine[1812]: Aug 13 08:44:09.742866 update_engine[1812]: Aug 13 08:44:09.742866 update_engine[1812]: Aug 13 08:44:09.742866 update_engine[1812]: Aug 13 08:44:09.742866 update_engine[1812]: I20250813 08:44:09.742781 1812 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 08:44:09.743842 locksmithd[1846]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Aug 13 08:44:09.746189 update_engine[1812]: I20250813 08:44:09.746153 1812 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 08:44:09.746421 update_engine[1812]: I20250813 08:44:09.746380 1812 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 13 08:44:09.747443 update_engine[1812]: E20250813 08:44:09.747401 1812 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 08:44:09.747443 update_engine[1812]: I20250813 08:44:09.747433 1812 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Aug 13 08:44:19.660710 update_engine[1812]: I20250813 08:44:19.660450 1812 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 08:44:19.661661 update_engine[1812]: I20250813 08:44:19.661021 1812 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 08:44:19.661661 update_engine[1812]: I20250813 08:44:19.661590 1812 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 08:44:19.662445 update_engine[1812]: E20250813 08:44:19.662338 1812 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 08:44:19.662631 update_engine[1812]: I20250813 08:44:19.662480 1812 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Aug 13 08:44:29.660771 update_engine[1812]: I20250813 08:44:29.660599 1812 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 08:44:29.661775 update_engine[1812]: I20250813 08:44:29.661207 1812 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 08:44:29.661775 update_engine[1812]: I20250813 08:44:29.661736 1812 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 13 08:44:29.662398 update_engine[1812]: E20250813 08:44:29.662297  1812 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 08:44:29.662584 update_engine[1812]: I20250813 08:44:29.662425  1812 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Aug 13 08:44:39.660846 update_engine[1812]: I20250813 08:44:39.660671  1812 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 08:44:39.661829 update_engine[1812]: I20250813 08:44:39.661295  1812 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 08:44:39.661948 update_engine[1812]: I20250813 08:44:39.661828  1812 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 08:44:39.662726 update_engine[1812]: E20250813 08:44:39.662616  1812 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 08:44:39.662912 update_engine[1812]: I20250813 08:44:39.662755  1812 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Aug 13 08:44:39.662912 update_engine[1812]: I20250813 08:44:39.662786  1812 omaha_request_action.cc:617] Omaha request response:
Aug 13 08:44:39.663129 update_engine[1812]: E20250813 08:44:39.662948  1812 omaha_request_action.cc:636] Omaha request network transfer failed.
Aug 13 08:44:39.663129 update_engine[1812]: I20250813 08:44:39.663003  1812 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Aug 13 08:44:39.663129 update_engine[1812]: I20250813 08:44:39.663023  1812 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Aug 13 08:44:39.663129 update_engine[1812]: I20250813 08:44:39.663039  1812 update_attempter.cc:306] Processing Done.
Aug 13 08:44:39.663129 update_engine[1812]: E20250813 08:44:39.663072  1812 update_attempter.cc:619] Update failed.
Aug 13 08:44:39.663129 update_engine[1812]: I20250813 08:44:39.663089  1812 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Aug 13 08:44:39.663129 update_engine[1812]: I20250813 08:44:39.663105  1812 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Aug 13 08:44:39.663129 update_engine[1812]: I20250813 08:44:39.663122  1812 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Aug 13 08:44:39.663835 update_engine[1812]: I20250813 08:44:39.663307  1812 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Aug 13 08:44:39.663835 update_engine[1812]: I20250813 08:44:39.663374  1812 omaha_request_action.cc:271] Posting an Omaha request to disabled
Aug 13 08:44:39.663835 update_engine[1812]: I20250813 08:44:39.663394  1812 omaha_request_action.cc:272] Request:
Aug 13 08:44:39.663835 update_engine[1812]:
Aug 13 08:44:39.663835 update_engine[1812]:
Aug 13 08:44:39.663835 update_engine[1812]:
Aug 13 08:44:39.663835 update_engine[1812]:
Aug 13 08:44:39.663835 update_engine[1812]:
Aug 13 08:44:39.663835 update_engine[1812]:
Aug 13 08:44:39.663835 update_engine[1812]: I20250813 08:44:39.663411  1812 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 08:44:39.663835 update_engine[1812]: I20250813 08:44:39.663820  1812 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 08:44:39.664760 update_engine[1812]: I20250813 08:44:39.664259  1812 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 08:44:39.664861 locksmithd[1846]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Aug 13 08:44:39.665536 update_engine[1812]: E20250813 08:44:39.664979  1812 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 08:44:39.665536 update_engine[1812]: I20250813 08:44:39.665109  1812 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Aug 13 08:44:39.665536 update_engine[1812]: I20250813 08:44:39.665138  1812 omaha_request_action.cc:617] Omaha request response:
Aug 13 08:44:39.665536 update_engine[1812]: I20250813 08:44:39.665156  1812 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Aug 13 08:44:39.665536 update_engine[1812]: I20250813 08:44:39.665171  1812 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Aug 13 08:44:39.665536 update_engine[1812]: I20250813 08:44:39.665217  1812 update_attempter.cc:306] Processing Done.
Aug 13 08:44:39.665536 update_engine[1812]: I20250813 08:44:39.665235  1812 update_attempter.cc:310] Error event sent.
Aug 13 08:44:39.665536 update_engine[1812]: I20250813 08:44:39.665261  1812 update_check_scheduler.cc:74] Next update check in 40m15s
Aug 13 08:44:39.666227 locksmithd[1846]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Aug 13 08:52:55.596930 systemd[1]: Started sshd@9-147.75.71.95:22-93.123.109.185:45572.service - OpenSSH per-connection server daemon (93.123.109.185:45572).
Aug 13 08:52:56.191393 sshd[9862]: Invalid user oneadmin from 93.123.109.185 port 45572
Aug 13 08:52:56.330497 sshd[9862]: Connection closed by invalid user oneadmin 93.123.109.185 port 45572 [preauth]
Aug 13 08:52:56.333989 systemd[1]: sshd@9-147.75.71.95:22-93.123.109.185:45572.service: Deactivated successfully.